Test Report: KVM_Linux_crio 21978

                    
c78c82fa8bc5e05550c6fccb0bebb9cb966c725e:2025-11-24:42489

Failed tests (5/431)

Order | Failed test | Duration (s)
46 | TestAddons/parallel/Ingress | 156.75
201 | TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim | 368.67
205 | TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL | 602.59
345 | TestPreload | 153.08
381 | TestPause/serial/SecondStartNoReconfiguration | 415.86
TestAddons/parallel/Ingress (156.75s)
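
The transcript below uses Go's parallel subtest markers (=== RUN, === PAUSE, === CONT): a subtest that calls t.Parallel is paused until the enclosing test group has finished registering its subtests, then resumed alongside them. A minimal sketch of that pattern using only the standard testing package (illustrative names, not minikube's actual test code):

	// parallel_sketch_test.go -- shows why "PAUSE" / "CONT" appear in the log.
	package example

	import "testing"

	func TestAddons(t *testing.T) {
		t.Run("parallel", func(t *testing.T) {
			t.Run("Ingress", func(t *testing.T) {
				t.Parallel() // prints "=== PAUSE" here, "=== CONT" when resumed
				// ... exercise the ingress addon against the running cluster ...
			})
		})
	}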

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-076740 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-076740 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-076740 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [03578caf-2a1a-4d02-b25e-2d01e414376a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [03578caf-2a1a-4d02-b25e-2d01e414376a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004328553s
I1124 08:32:33.482364    9629 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-076740 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.532056195s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-076740 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.17
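
The step that actually failed is the in-VM probe at addons_test.go:264: minikube ssh runs curl against 127.0.0.1 with the Host header nginx.example.com, and the inner curl exited with status 28, curl's "operation timed out" code, so the ingress controller never answered on port 80 within the deadline. A rough way to replay that probe outside the harness, assuming the binary path and profile name from this run (a sketch, not the test's own code):

	// probe_sketch.go -- replays the timed-out ingress probe from this report.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// 3-minute cap, roughly matching the ~2m13s the harness waited.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		// Binary path and profile name ("addons-076740") are taken from this run;
		// adjust both when reproducing elsewhere.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-076740",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\nerr: %v\n", out, err)
	}
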
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-076740 -n addons-076740
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 logs -n 25: (1.149067602s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-538093                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-538093 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-974725 --alsologtostderr --binary-mirror http://127.0.0.1:40623 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-974725 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-974725                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-974725 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-076740                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-076740                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-076740 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:31 UTC │
	│ addons  │ addons-076740 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │ 24 Nov 25 08:31 UTC │
	│ addons  │ addons-076740 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:31 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ enable headlamp -p addons-076740 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ ip      │ addons-076740 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-076740                                                                                                                                                                                                                                                                                                                                                                                         │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ ssh     │ addons-076740 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-076740 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ ssh     │ addons-076740 ssh cat /opt/local-path-provisioner/pvc-600f5cd9-f262-49bb-b127-38831b9747e0_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:32 UTC │
	│ addons  │ addons-076740 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:32 UTC │ 24 Nov 25 08:33 UTC │
	│ ip      │ addons-076740 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-076740        │ jenkins │ v1.37.0 │ 24 Nov 25 08:34 UTC │ 24 Nov 25 08:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:29:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:29:30.188622   10629 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:29:30.188904   10629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:30.188915   10629 out.go:374] Setting ErrFile to fd 2...
	I1124 08:29:30.188922   10629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:30.189106   10629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:29:30.189680   10629 out.go:368] Setting JSON to false
	I1124 08:29:30.190514   10629 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":706,"bootTime":1763972264,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:29:30.190572   10629 start.go:143] virtualization: kvm guest
	I1124 08:29:30.192817   10629 out.go:179] * [addons-076740] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:29:30.194388   10629 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:29:30.194392   10629 notify.go:221] Checking for updates...
	I1124 08:29:30.197151   10629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:29:30.198449   10629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:29:30.199801   10629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:29:30.201446   10629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:29:30.202902   10629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:29:30.204404   10629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:29:30.236833   10629 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 08:29:30.238216   10629 start.go:309] selected driver: kvm2
	I1124 08:29:30.238240   10629 start.go:927] validating driver "kvm2" against <nil>
	I1124 08:29:30.238251   10629 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:29:30.239271   10629 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:29:30.239574   10629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 08:29:30.239609   10629 cni.go:84] Creating CNI manager for ""
	I1124 08:29:30.239671   10629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 08:29:30.239682   10629 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 08:29:30.239729   10629 start.go:353] cluster config:
	{Name:addons-076740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-076740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1124 08:29:30.239892   10629 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:30.242215   10629 out.go:179] * Starting "addons-076740" primary control-plane node in "addons-076740" cluster
	I1124 08:29:30.243431   10629 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:30.243464   10629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 08:29:30.243471   10629 cache.go:65] Caching tarball of preloaded images
	I1124 08:29:30.243575   10629 preload.go:238] Found /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 08:29:30.243589   10629 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 08:29:30.243917   10629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/config.json ...
	I1124 08:29:30.243941   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/config.json: {Name:mk23725bdf7eb07208f5c7bcde1b5e6b93ebd75f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:30.244115   10629 start.go:360] acquireMachinesLock for addons-076740: {Name:mk7b5988e566cc8ac324d849b09ff116b4f24553 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 08:29:30.244195   10629 start.go:364] duration metric: took 63.456µs to acquireMachinesLock for "addons-076740"
	I1124 08:29:30.244228   10629 start.go:93] Provisioning new machine with config: &{Name:addons-076740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-076740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 08:29:30.244287   10629 start.go:125] createHost starting for "" (driver="kvm2")
	I1124 08:29:30.245927   10629 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1124 08:29:30.246068   10629 start.go:159] libmachine.API.Create for "addons-076740" (driver="kvm2")
	I1124 08:29:30.246097   10629 client.go:173] LocalClient.Create starting
	I1124 08:29:30.246254   10629 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem
	I1124 08:29:30.340369   10629 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem
	I1124 08:29:30.450775   10629 main.go:143] libmachine: creating domain...
	I1124 08:29:30.450795   10629 main.go:143] libmachine: creating network...
	I1124 08:29:30.452149   10629 main.go:143] libmachine: found existing default network
	I1124 08:29:30.452395   10629 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 08:29:30.452944   10629 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d267f0}
	I1124 08:29:30.453037   10629 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-076740</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 08:29:30.459235   10629 main.go:143] libmachine: creating private network mk-addons-076740 192.168.39.0/24...
	I1124 08:29:30.531037   10629 main.go:143] libmachine: private network mk-addons-076740 192.168.39.0/24 created
	I1124 08:29:30.531330   10629 main.go:143] libmachine: <network>
	  <name>mk-addons-076740</name>
	  <uuid>11d62327-ea2d-4a4c-b3f0-c954a4363a5d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:a6:60:d1'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 08:29:30.531370   10629 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740 ...
	I1124 08:29:30.531412   10629 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21978-5665/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1124 08:29:30.531426   10629 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:29:30.531548   10629 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21978-5665/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21978-5665/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1124 08:29:30.812940   10629 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa...
	I1124 08:29:30.976042   10629 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/addons-076740.rawdisk...
	I1124 08:29:30.976091   10629 main.go:143] libmachine: Writing magic tar header
	I1124 08:29:30.976136   10629 main.go:143] libmachine: Writing SSH key tar header
	I1124 08:29:30.976260   10629 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740 ...
	I1124 08:29:30.976349   10629 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740
	I1124 08:29:30.976391   10629 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740 (perms=drwx------)
	I1124 08:29:30.976417   10629 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21978-5665/.minikube/machines
	I1124 08:29:30.976435   10629 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21978-5665/.minikube/machines (perms=drwxr-xr-x)
	I1124 08:29:30.976466   10629 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:29:30.976484   10629 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21978-5665/.minikube (perms=drwxr-xr-x)
	I1124 08:29:30.976499   10629 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21978-5665
	I1124 08:29:30.976514   10629 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21978-5665 (perms=drwxrwxr-x)
	I1124 08:29:30.976542   10629 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1124 08:29:30.976559   10629 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1124 08:29:30.976577   10629 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1124 08:29:30.976590   10629 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1124 08:29:30.976608   10629 main.go:143] libmachine: checking permissions on dir: /home
	I1124 08:29:30.976619   10629 main.go:143] libmachine: skipping /home - not owner
	I1124 08:29:30.976629   10629 main.go:143] libmachine: defining domain...
	I1124 08:29:30.977993   10629 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-076740</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/addons-076740.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-076740'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1124 08:29:30.985784   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:98:29:26 in network default
	I1124 08:29:30.987347   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:30.987381   10629 main.go:143] libmachine: starting domain...
	I1124 08:29:30.987388   10629 main.go:143] libmachine: ensuring networks are active...
	I1124 08:29:30.988288   10629 main.go:143] libmachine: Ensuring network default is active
	I1124 08:29:30.988683   10629 main.go:143] libmachine: Ensuring network mk-addons-076740 is active
	I1124 08:29:30.989317   10629 main.go:143] libmachine: getting domain XML...
	I1124 08:29:30.990361   10629 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-076740</name>
	  <uuid>bcb70932-92ae-4333-adfc-d5e85155dbf4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/addons-076740.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:66:23:0e'/>
	      <source network='mk-addons-076740'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:98:29:26'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1124 08:29:32.291771   10629 main.go:143] libmachine: waiting for domain to start...
	I1124 08:29:32.293007   10629 main.go:143] libmachine: domain is now running
	I1124 08:29:32.293022   10629 main.go:143] libmachine: waiting for IP...
	I1124 08:29:32.293767   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:32.294445   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:32.294462   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:32.294765   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:32.294813   10629 retry.go:31] will retry after 306.761254ms: waiting for domain to come up
	I1124 08:29:32.603632   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:32.604144   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:32.604178   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:32.604471   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:32.604508   10629 retry.go:31] will retry after 242.770498ms: waiting for domain to come up
	I1124 08:29:32.849062   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:32.849595   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:32.849612   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:32.849940   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:32.849976   10629 retry.go:31] will retry after 333.332017ms: waiting for domain to come up
	I1124 08:29:33.184416   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:33.184907   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:33.184920   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:33.185211   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:33.185266   10629 retry.go:31] will retry after 477.549045ms: waiting for domain to come up
	I1124 08:29:33.664194   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:33.664924   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:33.664941   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:33.665312   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:33.665350   10629 retry.go:31] will retry after 531.610287ms: waiting for domain to come up
	I1124 08:29:34.197988   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:34.198535   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:34.198553   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:34.198835   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:34.198871   10629 retry.go:31] will retry after 948.81824ms: waiting for domain to come up
	I1124 08:29:35.148963   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:35.149521   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:35.149539   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:35.149790   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:35.149827   10629 retry.go:31] will retry after 1.010790476s: waiting for domain to come up
	I1124 08:29:36.163004   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:36.163548   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:36.163563   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:36.163845   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:36.163877   10629 retry.go:31] will retry after 1.130166547s: waiting for domain to come up
	I1124 08:29:37.296106   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:37.296720   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:37.296739   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:37.297022   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:37.297069   10629 retry.go:31] will retry after 1.768665058s: waiting for domain to come up
	I1124 08:29:39.068123   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:39.068672   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:39.068688   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:39.068986   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:39.069018   10629 retry.go:31] will retry after 1.471285301s: waiting for domain to come up
	I1124 08:29:40.542452   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:40.543253   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:40.543271   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:40.543614   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:40.543643   10629 retry.go:31] will retry after 2.506661101s: waiting for domain to come up
	I1124 08:29:43.053342   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:43.053957   10629 main.go:143] libmachine: no network interface addresses found for domain addons-076740 (source=lease)
	I1124 08:29:43.053972   10629 main.go:143] libmachine: trying to list again with source=arp
	I1124 08:29:43.054280   10629 main.go:143] libmachine: unable to find current IP address of domain addons-076740 in network mk-addons-076740 (interfaces detected: [])
	I1124 08:29:43.054316   10629 retry.go:31] will retry after 2.977730876s: waiting for domain to come up
	I1124 08:29:46.033803   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.034395   10629 main.go:143] libmachine: domain addons-076740 has current primary IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.034411   10629 main.go:143] libmachine: found domain IP: 192.168.39.17
	I1124 08:29:46.034420   10629 main.go:143] libmachine: reserving static IP address...
	I1124 08:29:46.034862   10629 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-076740", mac: "52:54:00:66:23:0e", ip: "192.168.39.17"} in network mk-addons-076740
	I1124 08:29:46.237249   10629 main.go:143] libmachine: reserved static IP address 192.168.39.17 for domain addons-076740
	I1124 08:29:46.237271   10629 main.go:143] libmachine: waiting for SSH...
	I1124 08:29:46.237277   10629 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 08:29:46.241076   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.241672   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.241699   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.241943   10629 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:46.242168   10629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1124 08:29:46.242180   10629 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 08:29:46.366972   10629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 08:29:46.367403   10629 main.go:143] libmachine: domain creation complete
	I1124 08:29:46.369085   10629 machine.go:94] provisionDockerMachine start ...
	I1124 08:29:46.371686   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.372073   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.372099   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.372284   10629 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:46.372534   10629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1124 08:29:46.372549   10629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 08:29:46.484767   10629 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 08:29:46.484799   10629 buildroot.go:166] provisioning hostname "addons-076740"
	I1124 08:29:46.487898   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.488493   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.488538   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.488732   10629 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:46.489013   10629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1124 08:29:46.489030   10629 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-076740 && echo "addons-076740" | sudo tee /etc/hostname
	I1124 08:29:46.620055   10629 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-076740
	
	I1124 08:29:46.623184   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.623580   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.623623   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.623785   10629 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:46.623963   10629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1124 08:29:46.623977   10629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-076740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-076740/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-076740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 08:29:46.749292   10629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 08:29:46.749323   10629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 08:29:46.749372   10629 buildroot.go:174] setting up certificates
	I1124 08:29:46.749396   10629 provision.go:84] configureAuth start
	I1124 08:29:46.752352   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.752770   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.752796   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.755399   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.755777   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.755804   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.755958   10629 provision.go:143] copyHostCerts
	I1124 08:29:46.756042   10629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 08:29:46.756214   10629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 08:29:46.756290   10629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 08:29:46.756340   10629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.addons-076740 san=[127.0.0.1 192.168.39.17 addons-076740 localhost minikube]
	I1124 08:29:46.846387   10629 provision.go:177] copyRemoteCerts
	I1124 08:29:46.846441   10629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 08:29:46.849182   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.849586   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:46.849610   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:46.849780   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:29:46.937877   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 08:29:46.967136   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 08:29:46.995792   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 08:29:47.024853   10629 provision.go:87] duration metric: took 275.440053ms to configureAuth
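configureAuth signs a server certificate with the local minikube CA using the SAN list shown above (127.0.0.1, 192.168.39.17, addons-076740, localhost, minikube). minikube does this in Go; an equivalent openssl sketch, with illustrative file names and not the actual implementation, looks roughly like:

    # assumes ca.pem / ca-key.pem copied from ~/.minikube/certs
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-076740" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.17,DNS:addons-076740,DNS:localhost,DNS:minikube") \
      -out server.pem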
	I1124 08:29:47.024885   10629 buildroot.go:189] setting minikube options for container-runtime
	I1124 08:29:47.025075   10629 config.go:182] Loaded profile config "addons-076740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:29:47.027897   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.028310   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.028339   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.028504   10629 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:47.028772   10629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1124 08:29:47.028795   10629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 08:29:47.305989   10629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 08:29:47.306015   10629 machine.go:97] duration metric: took 936.9137ms to provisionDockerMachine
	I1124 08:29:47.306028   10629 client.go:176] duration metric: took 17.059920409s to LocalClient.Create
	I1124 08:29:47.306042   10629 start.go:167] duration metric: took 17.059971907s to libmachine.API.Create "addons-076740"
	I1124 08:29:47.306051   10629 start.go:293] postStartSetup for "addons-076740" (driver="kvm2")
	I1124 08:29:47.306063   10629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 08:29:47.306137   10629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 08:29:47.309147   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.309684   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.309720   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.309875   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:29:47.401719   10629 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 08:29:47.406522   10629 info.go:137] Remote host: Buildroot 2025.02
	I1124 08:29:47.406550   10629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 08:29:47.406623   10629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 08:29:47.406647   10629 start.go:296] duration metric: took 100.58967ms for postStartSetup
	I1124 08:29:47.409698   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.410094   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.410113   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.410345   10629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/config.json ...
	I1124 08:29:47.410554   10629 start.go:128] duration metric: took 17.16625737s to createHost
	I1124 08:29:47.412590   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.412994   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.413020   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.413214   10629 main.go:143] libmachine: Using SSH client type: native
	I1124 08:29:47.413407   10629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I1124 08:29:47.413417   10629 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 08:29:47.526516   10629 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763972987.481737750
	
	I1124 08:29:47.526544   10629 fix.go:216] guest clock: 1763972987.481737750
	I1124 08:29:47.526553   10629 fix.go:229] Guest: 2025-11-24 08:29:47.48173775 +0000 UTC Remote: 2025-11-24 08:29:47.410565101 +0000 UTC m=+17.269325895 (delta=71.172649ms)
	I1124 08:29:47.526578   10629 fix.go:200] guest clock delta is within tolerance: 71.172649ms
	I1124 08:29:47.526587   10629 start.go:83] releasing machines lock for "addons-076740", held for 17.282376167s
	I1124 08:29:47.529393   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.529761   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.529780   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.530275   10629 ssh_runner.go:195] Run: cat /version.json
	I1124 08:29:47.530368   10629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 08:29:47.533382   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.533551   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.533792   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.533818   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.533939   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:47.533950   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:29:47.533965   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:47.534180   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:29:47.618627   10629 ssh_runner.go:195] Run: systemctl --version
	I1124 08:29:47.658896   10629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 08:29:47.820194   10629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 08:29:47.826997   10629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 08:29:47.827066   10629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 08:29:47.846599   10629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 08:29:47.846631   10629 start.go:496] detecting cgroup driver to use...
	I1124 08:29:47.846700   10629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 08:29:47.867239   10629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 08:29:47.884402   10629 docker.go:218] disabling cri-docker service (if available) ...
	I1124 08:29:47.884471   10629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 08:29:47.901632   10629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 08:29:47.917718   10629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 08:29:48.064412   10629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 08:29:48.284351   10629 docker.go:234] disabling docker service ...
	I1124 08:29:48.284413   10629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 08:29:48.301104   10629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 08:29:48.316949   10629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 08:29:48.475450   10629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 08:29:48.623919   10629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 08:29:48.639748   10629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 08:29:48.664065   10629 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/linux/amd64/v1.34.2/kubeadm
	I1124 08:29:50.192456   10629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 08:29:50.192525   10629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:50.205538   10629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 08:29:50.205614   10629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:50.218185   10629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:50.230664   10629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:50.242902   10629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 08:29:50.256181   10629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:50.268173   10629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 08:29:50.289378   10629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
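The sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. Written from scratch as a fresh drop-in (hypothetical file name; key placement follows CRI-O's documented TOML sections), the intended end state is roughly:

    sudo tee /etc/crio/crio.conf.d/99-minikube-example.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio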
	I1124 08:29:50.302294   10629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 08:29:50.313314   10629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 08:29:50.313380   10629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 08:29:50.333351   10629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
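Loading br_netfilter and enabling IPv4 forwarding are standard kubeadm prerequisites; the two commands above apply them for the current boot only. A persistent variant (hypothetical file names) would be:

    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    sudo tee /etc/sysctl.d/99-kubernetes.conf <<'EOF'
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system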
	I1124 08:29:50.345101   10629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:50.487978   10629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 08:29:50.596241   10629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 08:29:50.596347   10629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 08:29:50.601983   10629 start.go:564] Will wait 60s for crictl version
	I1124 08:29:50.602061   10629 ssh_runner.go:195] Run: which crictl
	I1124 08:29:50.606364   10629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 08:29:50.643101   10629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 08:29:50.643229   10629 ssh_runner.go:195] Run: crio --version
	I1124 08:29:50.673100   10629 ssh_runner.go:195] Run: crio --version
	I1124 08:29:50.704185   10629 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1124 08:29:50.708416   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:50.708828   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:29:50.708856   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:29:50.709048   10629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 08:29:50.713877   10629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 08:29:50.728605   10629 kubeadm.go:884] updating cluster {Name:addons-076740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-076740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 08:29:50.728779   10629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:51.006624   10629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:51.287253   10629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:51.586549   10629 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:51.586690   10629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:51.863959   10629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:52.144334   10629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:52.425680   10629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 08:29:52.456199   10629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1124 08:29:52.456289   10629 ssh_runner.go:195] Run: which lz4
	I1124 08:29:52.460585   10629 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 08:29:52.465062   10629 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 08:29:52.465101   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1124 08:29:53.735075   10629 crio.go:462] duration metric: took 1.274536387s to copy over tarball
	I1124 08:29:53.735172   10629 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 08:29:55.195909   10629 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.46070225s)
	I1124 08:29:55.195936   10629 crio.go:469] duration metric: took 1.460835276s to extract the tarball
	I1124 08:29:55.195943   10629 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 08:29:55.232388   10629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 08:29:55.270503   10629 crio.go:514] all images are preloaded for cri-o runtime.
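The preload path avoids pulling every image individually: a ~340 MB lz4 tarball of the container-image store is copied to the guest and unpacked under /var, after which crictl reports all v1.34.2 images as present. The two operative commands, exactly as logged above, are:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json   # should now list registry.k8s.io/kube-apiserver:v1.34.2 and friends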
	I1124 08:29:55.270529   10629 cache_images.go:86] Images are preloaded, skipping loading
	I1124 08:29:55.270538   10629 kubeadm.go:935] updating node { 192.168.39.17 8443 v1.34.2 crio true true} ...
	I1124 08:29:55.270617   10629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-076740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-076740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 08:29:55.270677   10629 ssh_runner.go:195] Run: crio config
	I1124 08:29:55.319139   10629 cni.go:84] Creating CNI manager for ""
	I1124 08:29:55.319176   10629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 08:29:55.319194   10629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 08:29:55.319215   10629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-076740 NodeName:addons-076740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 08:29:55.319323   10629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-076740"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.17"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
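Once minikube has copied this file into place as /var/tmp/minikube/kubeadm.yaml (see the cp further down), it can be sanity-checked without touching the node by running kubeadm in dry-run mode against it, reusing the cached binaries path from the log:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run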
	
	I1124 08:29:55.319390   10629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 08:29:55.331436   10629 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 08:29:55.331517   10629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 08:29:55.343319   10629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1124 08:29:55.363747   10629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 08:29:55.383983   10629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 08:29:55.406461   10629 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I1124 08:29:55.411267   10629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 08:29:55.426372   10629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:29:55.569591   10629 ssh_runner.go:195] Run: sudo systemctl start kubelet
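At this point the kubelet unit and its 10-kubeadm.conf drop-in are installed and the service has been started; a quick way to confirm it actually came up before kubeadm runs:

    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet -n 20 --no-pager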
	I1124 08:29:55.609509   10629 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740 for IP: 192.168.39.17
	I1124 08:29:55.609536   10629 certs.go:195] generating shared ca certs ...
	I1124 08:29:55.609575   10629 certs.go:227] acquiring lock for ca certs: {Name:mkc847d4fb6fb61872e24a1bb00356ff9ef1a409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.609730   10629 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key
	I1124 08:29:55.664384   10629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt ...
	I1124 08:29:55.664416   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt: {Name:mk21cef8471035fac9ab2e8d6fd9c99a56201669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.664583   10629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key ...
	I1124 08:29:55.664594   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key: {Name:mk54ea2a44ed134ce07aff4cdaea55756013827d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.664664   10629 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key
	I1124 08:29:55.764855   10629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt ...
	I1124 08:29:55.764888   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt: {Name:mkb9ee5db64e817e18c51e963ab14668e20f46ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.765058   10629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key ...
	I1124 08:29:55.765071   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key: {Name:mk4701b962a9e31ef5382367d25f49faed066a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.765135   10629 certs.go:257] generating profile certs ...
	I1124 08:29:55.765210   10629 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.key
	I1124 08:29:55.765231   10629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt with IP's: []
	I1124 08:29:55.869993   10629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt ...
	I1124 08:29:55.870023   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: {Name:mk414d2c3201a4f21f9a304dcc1a2e4dfaf6406a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.870199   10629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.key ...
	I1124 08:29:55.870213   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.key: {Name:mk92f953d8d6fc9e0adb197d66e23578ce46aac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:55.870284   10629 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.key.4c753167
	I1124 08:29:55.870301   10629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.crt.4c753167 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17]
	I1124 08:29:56.007929   10629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.crt.4c753167 ...
	I1124 08:29:56.007959   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.crt.4c753167: {Name:mk718dff70e0eaae40c5220fa194cb8da6c1a9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:56.008121   10629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.key.4c753167 ...
	I1124 08:29:56.008136   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.key.4c753167: {Name:mk4af007f88aceb0c1c6c7b59947a5578f69da23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:56.008240   10629 certs.go:382] copying /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.crt.4c753167 -> /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.crt
	I1124 08:29:56.008316   10629 certs.go:386] copying /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.key.4c753167 -> /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.key
	I1124 08:29:56.008363   10629 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.key
	I1124 08:29:56.008381   10629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.crt with IP's: []
	I1124 08:29:56.050691   10629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.crt ...
	I1124 08:29:56.050729   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.crt: {Name:mk8e050f50f88f8985b09cc1a308d1b7be90b42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:56.050932   10629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.key ...
	I1124 08:29:56.050949   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.key: {Name:mk6c5d8175caae918f1908cdc55bc3b7c013286b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:56.051182   10629 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 08:29:56.051230   10629 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem (1078 bytes)
	I1124 08:29:56.051269   10629 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem (1123 bytes)
	I1124 08:29:56.051298   10629 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem (1675 bytes)
	I1124 08:29:56.051877   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 08:29:56.083358   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 08:29:56.116373   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 08:29:56.146658   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 08:29:56.184134   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 08:29:56.218018   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 08:29:56.249602   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 08:29:56.285105   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 08:29:56.332446   10629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 08:29:56.369094   10629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 08:29:56.390636   10629 ssh_runner.go:195] Run: openssl version
	I1124 08:29:56.397362   10629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 08:29:56.411027   10629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:56.416342   10629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:56.416401   10629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 08:29:56.423909   10629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
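The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is how the system trust store indexes certificates. The same value can be reproduced by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"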
	I1124 08:29:56.437625   10629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 08:29:56.442732   10629 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 08:29:56.442801   10629 kubeadm.go:401] StartCluster: {Name:addons-076740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-076740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:56.442868   10629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 08:29:56.442930   10629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 08:29:56.477641   10629 cri.go:89] found id: ""
	I1124 08:29:56.477707   10629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 08:29:56.490121   10629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 08:29:56.502458   10629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 08:29:56.514657   10629 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 08:29:56.514679   10629 kubeadm.go:158] found existing configuration files:
	
	I1124 08:29:56.514721   10629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 08:29:56.526280   10629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 08:29:56.526351   10629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 08:29:56.537999   10629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 08:29:56.549213   10629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 08:29:56.549295   10629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 08:29:56.561806   10629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 08:29:56.573510   10629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 08:29:56.573579   10629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 08:29:56.585466   10629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 08:29:56.597117   10629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 08:29:56.597215   10629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 08:29:56.609930   10629 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 08:29:56.764447   10629 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 08:30:09.416865   10629 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 08:30:09.416948   10629 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 08:30:09.417044   10629 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 08:30:09.417202   10629 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 08:30:09.417340   10629 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 08:30:09.417426   10629 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 08:30:09.419323   10629 out.go:252]   - Generating certificates and keys ...
	I1124 08:30:09.419397   10629 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 08:30:09.419448   10629 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 08:30:09.419513   10629 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 08:30:09.419560   10629 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 08:30:09.419609   10629 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 08:30:09.419650   10629 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 08:30:09.419709   10629 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 08:30:09.419900   10629 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-076740 localhost] and IPs [192.168.39.17 127.0.0.1 ::1]
	I1124 08:30:09.419991   10629 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 08:30:09.420139   10629 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-076740 localhost] and IPs [192.168.39.17 127.0.0.1 ::1]
	I1124 08:30:09.420256   10629 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 08:30:09.420344   10629 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 08:30:09.420422   10629 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 08:30:09.420508   10629 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 08:30:09.420574   10629 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 08:30:09.420646   10629 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 08:30:09.420689   10629 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 08:30:09.420750   10629 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 08:30:09.420799   10629 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 08:30:09.420909   10629 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 08:30:09.421013   10629 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 08:30:09.422425   10629 out.go:252]   - Booting up control plane ...
	I1124 08:30:09.422524   10629 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 08:30:09.422619   10629 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 08:30:09.422683   10629 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 08:30:09.422771   10629 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 08:30:09.422881   10629 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 08:30:09.423004   10629 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 08:30:09.423073   10629 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 08:30:09.423109   10629 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 08:30:09.423253   10629 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 08:30:09.423343   10629 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 08:30:09.423392   10629 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001684485s
	I1124 08:30:09.423463   10629 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 08:30:09.423562   10629 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.17:8443/livez
	I1124 08:30:09.423674   10629 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 08:30:09.423784   10629 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 08:30:09.423857   10629 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.00465775s
	I1124 08:30:09.423917   10629 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.979714324s
	I1124 08:30:09.423992   10629 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001527306s
	I1124 08:30:09.424081   10629 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 08:30:09.424212   10629 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 08:30:09.424275   10629 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 08:30:09.424433   10629 kubeadm.go:319] [mark-control-plane] Marking the node addons-076740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 08:30:09.424525   10629 kubeadm.go:319] [bootstrap-token] Using token: vrxgln.ewqqzg5ccy0lty9a
	I1124 08:30:09.426347   10629 out.go:252]   - Configuring RBAC rules ...
	I1124 08:30:09.426459   10629 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 08:30:09.426567   10629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 08:30:09.426718   10629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 08:30:09.426836   10629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 08:30:09.426930   10629 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 08:30:09.427003   10629 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 08:30:09.427134   10629 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 08:30:09.427203   10629 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 08:30:09.427241   10629 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 08:30:09.427247   10629 kubeadm.go:319] 
	I1124 08:30:09.427293   10629 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 08:30:09.427298   10629 kubeadm.go:319] 
	I1124 08:30:09.427364   10629 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 08:30:09.427372   10629 kubeadm.go:319] 
	I1124 08:30:09.427391   10629 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 08:30:09.427451   10629 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 08:30:09.427501   10629 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 08:30:09.427509   10629 kubeadm.go:319] 
	I1124 08:30:09.427551   10629 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 08:30:09.427557   10629 kubeadm.go:319] 
	I1124 08:30:09.427599   10629 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 08:30:09.427620   10629 kubeadm.go:319] 
	I1124 08:30:09.427676   10629 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 08:30:09.427743   10629 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 08:30:09.427807   10629 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 08:30:09.427813   10629 kubeadm.go:319] 
	I1124 08:30:09.427883   10629 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 08:30:09.427944   10629 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 08:30:09.427954   10629 kubeadm.go:319] 
	I1124 08:30:09.428018   10629 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vrxgln.ewqqzg5ccy0lty9a \
	I1124 08:30:09.428144   10629 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa \
	I1124 08:30:09.428201   10629 kubeadm.go:319] 	--control-plane 
	I1124 08:30:09.428211   10629 kubeadm.go:319] 
	I1124 08:30:09.428316   10629 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 08:30:09.428324   10629 kubeadm.go:319] 
	I1124 08:30:09.428426   10629 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vrxgln.ewqqzg5ccy0lty9a \
	I1124 08:30:09.428577   10629 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa 
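The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed (for example to hand a join command to another node later), the standard recipe works against minikube's cert dir (/var/lib/minikube/certs, per the certificatesDir in the config above), assuming an RSA CA key:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'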
	I1124 08:30:09.428592   10629 cni.go:84] Creating CNI manager for ""
	I1124 08:30:09.428599   10629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 08:30:09.430153   10629 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 08:30:09.431523   10629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 08:30:09.448329   10629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
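The 496-byte /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration matching the 10.244.0.0/16 pod CIDR chosen earlier. An illustrative conflist of that shape (not necessarily byte-for-byte what minikube writes):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF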
	I1124 08:30:09.470258   10629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 08:30:09.470338   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:09.470416   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-076740 minikube.k8s.io/updated_at=2025_11_24T08_30_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=addons-076740 minikube.k8s.io/primary=true
	I1124 08:30:09.597287   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:09.661913   10629 ops.go:34] apiserver oom_adj: -16
	I1124 08:30:10.098027   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:10.597783   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:11.098136   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:11.597842   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:12.098185   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:12.597997   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:13.098325   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:13.598055   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:14.097555   10629 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 08:30:14.262795   10629 kubeadm.go:1114] duration metric: took 4.792524644s to wait for elevateKubeSystemPrivileges
	I1124 08:30:14.262835   10629 kubeadm.go:403] duration metric: took 17.820038707s to StartCluster
	I1124 08:30:14.262856   10629 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:30:14.263012   10629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:30:14.263465   10629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:30:14.263725   10629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 08:30:14.263750   10629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 08:30:14.263815   10629 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 08:30:14.263948   10629 addons.go:70] Setting yakd=true in profile "addons-076740"
	I1124 08:30:14.263963   10629 addons.go:70] Setting inspektor-gadget=true in profile "addons-076740"
	I1124 08:30:14.263972   10629 addons.go:70] Setting registry-creds=true in profile "addons-076740"
	I1124 08:30:14.263987   10629 addons.go:70] Setting storage-provisioner=true in profile "addons-076740"
	I1124 08:30:14.263994   10629 addons.go:239] Setting addon inspektor-gadget=true in "addons-076740"
	I1124 08:30:14.263998   10629 config.go:182] Loaded profile config "addons-076740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:30:14.264006   10629 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-076740"
	I1124 08:30:14.264008   10629 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-076740"
	I1124 08:30:14.264017   10629 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-076740"
	I1124 08:30:14.264023   10629 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-076740"
	I1124 08:30:14.264002   10629 addons.go:70] Setting metrics-server=true in profile "addons-076740"
	I1124 08:30:14.264038   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.264044   10629 addons.go:70] Setting registry=true in profile "addons-076740"
	I1124 08:30:14.264046   10629 addons.go:239] Setting addon metrics-server=true in "addons-076740"
	I1124 08:30:14.264058   10629 addons.go:239] Setting addon registry=true in "addons-076740"
	I1124 08:30:14.264072   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.264085   10629 addons.go:70] Setting gcp-auth=true in profile "addons-076740"
	I1124 08:30:14.264101   10629 mustload.go:66] Loading cluster: addons-076740
	I1124 08:30:14.264277   10629 config.go:182] Loaded profile config "addons-076740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:30:14.264401   10629 addons.go:70] Setting cloud-spanner=true in profile "addons-076740"
	I1124 08:30:14.264429   10629 addons.go:239] Setting addon cloud-spanner=true in "addons-076740"
	I1124 08:30:14.264455   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.264681   10629 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-076740"
	I1124 08:30:14.264704   10629 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-076740"
	I1124 08:30:14.264719   10629 addons.go:70] Setting ingress-dns=true in profile "addons-076740"
	I1124 08:30:14.264732   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.264736   10629 addons.go:239] Setting addon ingress-dns=true in "addons-076740"
	I1124 08:30:14.264771   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.265199   10629 addons.go:70] Setting volcano=true in profile "addons-076740"
	I1124 08:30:14.263995   10629 addons.go:239] Setting addon registry-creds=true in "addons-076740"
	I1124 08:30:14.264040   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.263978   10629 addons.go:239] Setting addon yakd=true in "addons-076740"
	I1124 08:30:14.265239   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.265289   10629 addons.go:70] Setting ingress=true in profile "addons-076740"
	I1124 08:30:14.265307   10629 addons.go:239] Setting addon ingress=true in "addons-076740"
	I1124 08:30:14.265329   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.265564   10629 addons.go:239] Setting addon volcano=true in "addons-076740"
	I1124 08:30:14.265605   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.264075   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.265842   10629 addons.go:70] Setting volumesnapshots=true in profile "addons-076740"
	I1124 08:30:14.265865   10629 addons.go:239] Setting addon volumesnapshots=true in "addons-076740"
	I1124 08:30:14.265889   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.265970   10629 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-076740"
	I1124 08:30:14.266029   10629 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-076740"
	I1124 08:30:14.266051   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.266363   10629 addons.go:70] Setting default-storageclass=true in profile "addons-076740"
	I1124 08:30:14.266395   10629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-076740"
	I1124 08:30:14.266642   10629 out.go:179] * Verifying Kubernetes components...
	I1124 08:30:14.266850   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.263999   10629 addons.go:239] Setting addon storage-provisioner=true in "addons-076740"
	I1124 08:30:14.266902   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.268252   10629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 08:30:14.270126   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.271887   10629 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-076740"
	I1124 08:30:14.271927   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.272548   10629 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 08:30:14.273490   10629 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	W1124 08:30:14.273533   10629 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 08:30:14.273534   10629 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 08:30:14.273546   10629 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 08:30:14.274299   10629 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 08:30:14.274305   10629 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 08:30:14.275112   10629 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 08:30:14.275150   10629 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 08:30:14.275636   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 08:30:14.275701   10629 addons.go:239] Setting addon default-storageclass=true in "addons-076740"
	I1124 08:30:14.274349   10629 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 08:30:14.276229   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 08:30:14.276567   10629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 08:30:14.275744   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:14.275831   10629 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 08:30:14.277068   10629 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 08:30:14.275858   10629 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 08:30:14.277230   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 08:30:14.277374   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 08:30:14.276600   10629 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 08:30:14.276626   10629 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 08:30:14.277957   10629 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 08:30:14.278266   10629 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 08:30:14.278301   10629 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 08:30:14.278664   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 08:30:14.278267   10629 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 08:30:14.278268   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 08:30:14.278319   10629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 08:30:14.279338   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 08:30:14.278339   10629 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 08:30:14.278986   10629 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 08:30:14.278994   10629 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 08:30:14.279660   10629 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 08:30:14.279858   10629 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 08:30:14.280183   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 08:30:14.280777   10629 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 08:30:14.280797   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 08:30:14.281753   10629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 08:30:14.281769   10629 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:30:14.281774   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 08:30:14.281835   10629 out.go:179]   - Using image docker.io/busybox:stable
	I1124 08:30:14.281771   10629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 08:30:14.281888   10629 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 08:30:14.281908   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 08:30:14.283567   10629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 08:30:14.283588   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 08:30:14.284325   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.284735   10629 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:30:14.284745   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 08:30:14.285784   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.285811   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.285816   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.286633   10629 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 08:30:14.286649   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 08:30:14.286783   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.287207   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.287619   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.287651   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.287822   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.287967   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 08:30:14.288293   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.288873   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.288926   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.288955   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.289868   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.289895   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.289909   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.290577   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.290916   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.290948   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.291010   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 08:30:14.291061   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.291601   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.291980   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.292008   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.292306   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.292549   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.292597   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.292852   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.293276   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.293308   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.293315   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.293374   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.293398   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.293741   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 08:30:14.293784   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.293940   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.294097   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.294193   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.294221   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.294320   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.294358   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.294443   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.294467   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.294727   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.294991   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.295026   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.295046   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.295288   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.295479   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.295522   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.295650   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.295676   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.295811   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.295978   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.296058   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.296406   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.296432   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.296602   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:14.296771   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 08:30:14.298097   10629 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 08:30:14.299292   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 08:30:14.299309   10629 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 08:30:14.302283   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.302863   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:14.302898   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:14.303122   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	W1124 08:30:14.749012   10629 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54610->192.168.39.17:22: read: connection reset by peer
	I1124 08:30:14.749044   10629 retry.go:31] will retry after 188.94026ms: ssh: handshake failed: read tcp 192.168.39.1:54610->192.168.39.17:22: read: connection reset by peer
	I1124 08:30:15.132482   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 08:30:15.192539   10629 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 08:30:15.192570   10629 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 08:30:15.285330   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 08:30:15.321562   10629 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 08:30:15.321581   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 08:30:15.347312   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 08:30:15.354487   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 08:30:15.366310   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 08:30:15.402945   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 08:30:15.497545   10629 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 08:30:15.497572   10629 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 08:30:15.509142   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 08:30:15.547875   10629 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 08:30:15.547904   10629 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 08:30:15.681975   10629 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.418216616s)
	I1124 08:30:15.682020   10629 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.413738192s)
	I1124 08:30:15.682082   10629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 08:30:15.682205   10629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 08:30:15.742517   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 08:30:15.814451   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 08:30:15.840474   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 08:30:15.840498   10629 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 08:30:15.843704   10629 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 08:30:15.843726   10629 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 08:30:15.872735   10629 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 08:30:15.872758   10629 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 08:30:15.931597   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 08:30:16.057852   10629 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 08:30:16.057880   10629 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 08:30:16.154627   10629 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 08:30:16.154661   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 08:30:16.264981   10629 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 08:30:16.265008   10629 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 08:30:16.306569   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 08:30:16.306597   10629 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 08:30:16.323306   10629 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 08:30:16.323337   10629 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 08:30:16.483786   10629 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 08:30:16.483808   10629 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 08:30:16.511915   10629 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 08:30:16.511938   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 08:30:16.551008   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 08:30:16.633964   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 08:30:16.633999   10629 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 08:30:16.640413   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 08:30:16.821828   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 08:30:16.821862   10629 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 08:30:16.824932   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 08:30:17.015054   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 08:30:17.015089   10629 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 08:30:17.191130   10629 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:30:17.191177   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 08:30:17.269665   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.137145625s)
	I1124 08:30:17.414601   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.129236982s)
	I1124 08:30:17.414671   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067325938s)
	I1124 08:30:17.435514   10629 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 08:30:17.435541   10629 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 08:30:17.708446   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:30:17.774260   10629 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 08:30:17.774293   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 08:30:18.236029   10629 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 08:30:18.236083   10629 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 08:30:18.555872   10629 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 08:30:18.555898   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 08:30:18.676638   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.322110738s)
	I1124 08:30:19.003628   10629 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 08:30:19.003659   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 08:30:19.082818   10629 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 08:30:19.082842   10629 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 08:30:19.368965   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 08:30:20.886553   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.520198042s)
	I1124 08:30:20.886641   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.483672933s)
	I1124 08:30:20.886711   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.377533796s)
	I1124 08:30:20.886773   10629 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.204536845s)
	I1124 08:30:20.886800   10629 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1124 08:30:20.886831   10629 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.204707277s)
	I1124 08:30:20.886858   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.144304868s)
	I1124 08:30:20.887656   10629 node_ready.go:35] waiting up to 6m0s for node "addons-076740" to be "Ready" ...
	I1124 08:30:21.031003   10629 node_ready.go:49] node "addons-076740" is "Ready"
	I1124 08:30:21.031047   10629 node_ready.go:38] duration metric: took 143.365015ms for node "addons-076740" to be "Ready" ...
	I1124 08:30:21.031065   10629 api_server.go:52] waiting for apiserver process to appear ...
	I1124 08:30:21.031129   10629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 08:30:21.527726   10629 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-076740" context rescaled to 1 replicas
	I1124 08:30:21.724830   10629 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 08:30:21.728004   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:21.728525   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:21.728553   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:21.728722   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:21.942552   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.128066012s)
	I1124 08:30:22.204261   10629 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 08:30:22.402088   10629 addons.go:239] Setting addon gcp-auth=true in "addons-076740"
	I1124 08:30:22.402169   10629 host.go:66] Checking if "addons-076740" exists ...
	I1124 08:30:22.404412   10629 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 08:30:22.407480   10629 main.go:143] libmachine: domain addons-076740 has defined MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:22.408045   10629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:23:0e", ip: ""} in network mk-addons-076740: {Iface:virbr1 ExpiryTime:2025-11-24 09:29:46 +0000 UTC Type:0 Mac:52:54:00:66:23:0e Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:addons-076740 Clientid:01:52:54:00:66:23:0e}
	I1124 08:30:22.408087   10629 main.go:143] libmachine: domain addons-076740 has defined IP address 192.168.39.17 and MAC address 52:54:00:66:23:0e in network mk-addons-076740
	I1124 08:30:22.408344   10629 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/addons-076740/id_rsa Username:docker}
	I1124 08:30:22.707786   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.776157812s)
	I1124 08:30:22.707820   10629 addons.go:495] Verifying addon ingress=true in "addons-076740"
	I1124 08:30:22.707979   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.067514591s)
	I1124 08:30:22.707900   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.156854134s)
	I1124 08:30:22.708009   10629 addons.go:495] Verifying addon metrics-server=true in "addons-076740"
	I1124 08:30:22.708025   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.88305777s)
	I1124 08:30:22.708025   10629 addons.go:495] Verifying addon registry=true in "addons-076740"
	I1124 08:30:22.709578   10629 out.go:179] * Verifying registry addon...
	I1124 08:30:22.709578   10629 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-076740 service yakd-dashboard -n yakd-dashboard
	
	I1124 08:30:22.709583   10629 out.go:179] * Verifying ingress addon...
	I1124 08:30:22.711407   10629 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 08:30:22.712692   10629 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 08:30:22.792604   10629 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 08:30:22.792625   10629 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 08:30:22.792649   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:22.792626   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.226048   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.226127   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:23.288460   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.579965937s)
	W1124 08:30:23.288529   10629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 08:30:23.288556   10629 retry.go:31] will retry after 178.004089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 08:30:23.467569   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 08:30:23.734402   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:23.737815   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.226212   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:24.238917   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.436086   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.067074838s)
	I1124 08:30:24.436116   10629 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.404963909s)
	I1124 08:30:24.436128   10629 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-076740"
	I1124 08:30:24.436144   10629 api_server.go:72] duration metric: took 10.17235624s to wait for apiserver process to appear ...
	I1124 08:30:24.436152   10629 api_server.go:88] waiting for apiserver healthz status ...
	I1124 08:30:24.436171   10629 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.031716981s)
	I1124 08:30:24.436184   10629 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I1124 08:30:24.437897   10629 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 08:30:24.437899   10629 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 08:30:24.439312   10629 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 08:30:24.440136   10629 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 08:30:24.440495   10629 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 08:30:24.440517   10629 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 08:30:24.452395   10629 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I1124 08:30:24.458004   10629 api_server.go:141] control plane version: v1.34.2
	I1124 08:30:24.458042   10629 api_server.go:131] duration metric: took 21.881665ms to wait for apiserver health ...
	I1124 08:30:24.458055   10629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 08:30:24.462130   10629 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 08:30:24.462156   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:24.476830   10629 system_pods.go:59] 20 kube-system pods found
	I1124 08:30:24.476882   10629 system_pods.go:61] "amd-gpu-device-plugin-d9h4q" [531de084-465c-4122-81af-f1cb7db6f953] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:24.476896   10629 system_pods.go:61] "coredns-66bc5c9577-hf96x" [3eb81f21-11fb-43aa-aed2-d7ec1a7bf527] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:24.476909   10629 system_pods.go:61] "coredns-66bc5c9577-tgs6q" [b7de76da-9e9b-4c39-8d98-d36664df8be1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:24.476917   10629 system_pods.go:61] "csi-hostpath-attacher-0" [01bdefaf-63d2-405c-9357-17ffb6073004] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:24.476924   10629 system_pods.go:61] "csi-hostpath-resizer-0" [b56139e8-adf5-4665-b38a-c762e876ba59] Pending
	I1124 08:30:24.476937   10629 system_pods.go:61] "csi-hostpathplugin-ldm95" [7d5b93e9-ae85-4e9c-9163-e1f0e3e649f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:24.476949   10629 system_pods.go:61] "etcd-addons-076740" [0cf45511-0738-4ecc-9adf-7dc14bf5c4e5] Running
	I1124 08:30:24.476956   10629 system_pods.go:61] "kube-apiserver-addons-076740" [10b8fcbe-8f03-4f4d-ab1a-06f4e5030816] Running
	I1124 08:30:24.476961   10629 system_pods.go:61] "kube-controller-manager-addons-076740" [2947863c-b529-46b7-a640-9f6bfcc5e037] Running
	I1124 08:30:24.476978   10629 system_pods.go:61] "kube-ingress-dns-minikube" [328ed36c-6854-4cad-96d8-b7bf66c40615] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:24.476983   10629 system_pods.go:61] "kube-proxy-xp75h" [99affeb8-7846-448d-af12-b3b421836507] Running
	I1124 08:30:24.476990   10629 system_pods.go:61] "kube-scheduler-addons-076740" [1b5bbd5e-fe68-4e0f-883d-9af7b890eab2] Running
	I1124 08:30:24.477006   10629 system_pods.go:61] "metrics-server-85b7d694d7-vqcv6" [ac18afbb-29f5-4ebe-a88d-e67822460468] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:24.477019   10629 system_pods.go:61] "nvidia-device-plugin-daemonset-c78hg" [4920cb6b-84d5-44d0-96e9-8aac9b76c9e0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:24.477029   10629 system_pods.go:61] "registry-6b586f9694-bbszd" [f5bd4f8d-32c3-4226-9f47-38d07eaa1ddd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:24.477043   10629 system_pods.go:61] "registry-creds-764b6fb674-ccb9n" [ee76931b-df2e-474f-a982-9bdd6bc44522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:24.477052   10629 system_pods.go:61] "registry-proxy-24jnp" [d30eb954-1f06-43b1-98ff-136a4042942e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:24.477065   10629 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2rhhr" [2e7749cc-71ec-4279-a498-7e2827a7feff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:24.477075   10629 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pcdsj" [5d2439a5-ddb4-443b-babd-bf95b689ccb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:24.477082   10629 system_pods.go:61] "storage-provisioner" [c836f22f-4939-46e3-96ff-077c5057f334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 08:30:24.477091   10629 system_pods.go:74] duration metric: took 19.029212ms to wait for pod list to return data ...
	I1124 08:30:24.477104   10629 default_sa.go:34] waiting for default service account to be created ...
	I1124 08:30:24.494069   10629 default_sa.go:45] found service account: "default"
	I1124 08:30:24.494098   10629 default_sa.go:55] duration metric: took 16.98638ms for default service account to be created ...
	I1124 08:30:24.494115   10629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 08:30:24.513393   10629 system_pods.go:86] 20 kube-system pods found
	I1124 08:30:24.513426   10629 system_pods.go:89] "amd-gpu-device-plugin-d9h4q" [531de084-465c-4122-81af-f1cb7db6f953] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 08:30:24.513435   10629 system_pods.go:89] "coredns-66bc5c9577-hf96x" [3eb81f21-11fb-43aa-aed2-d7ec1a7bf527] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:24.513465   10629 system_pods.go:89] "coredns-66bc5c9577-tgs6q" [b7de76da-9e9b-4c39-8d98-d36664df8be1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 08:30:24.513477   10629 system_pods.go:89] "csi-hostpath-attacher-0" [01bdefaf-63d2-405c-9357-17ffb6073004] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 08:30:24.513482   10629 system_pods.go:89] "csi-hostpath-resizer-0" [b56139e8-adf5-4665-b38a-c762e876ba59] Pending
	I1124 08:30:24.513489   10629 system_pods.go:89] "csi-hostpathplugin-ldm95" [7d5b93e9-ae85-4e9c-9163-e1f0e3e649f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 08:30:24.513493   10629 system_pods.go:89] "etcd-addons-076740" [0cf45511-0738-4ecc-9adf-7dc14bf5c4e5] Running
	I1124 08:30:24.513497   10629 system_pods.go:89] "kube-apiserver-addons-076740" [10b8fcbe-8f03-4f4d-ab1a-06f4e5030816] Running
	I1124 08:30:24.513500   10629 system_pods.go:89] "kube-controller-manager-addons-076740" [2947863c-b529-46b7-a640-9f6bfcc5e037] Running
	I1124 08:30:24.513506   10629 system_pods.go:89] "kube-ingress-dns-minikube" [328ed36c-6854-4cad-96d8-b7bf66c40615] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 08:30:24.513511   10629 system_pods.go:89] "kube-proxy-xp75h" [99affeb8-7846-448d-af12-b3b421836507] Running
	I1124 08:30:24.513515   10629 system_pods.go:89] "kube-scheduler-addons-076740" [1b5bbd5e-fe68-4e0f-883d-9af7b890eab2] Running
	I1124 08:30:24.513531   10629 system_pods.go:89] "metrics-server-85b7d694d7-vqcv6" [ac18afbb-29f5-4ebe-a88d-e67822460468] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 08:30:24.513542   10629 system_pods.go:89] "nvidia-device-plugin-daemonset-c78hg" [4920cb6b-84d5-44d0-96e9-8aac9b76c9e0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 08:30:24.513547   10629 system_pods.go:89] "registry-6b586f9694-bbszd" [f5bd4f8d-32c3-4226-9f47-38d07eaa1ddd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 08:30:24.513552   10629 system_pods.go:89] "registry-creds-764b6fb674-ccb9n" [ee76931b-df2e-474f-a982-9bdd6bc44522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 08:30:24.513557   10629 system_pods.go:89] "registry-proxy-24jnp" [d30eb954-1f06-43b1-98ff-136a4042942e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 08:30:24.513567   10629 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2rhhr" [2e7749cc-71ec-4279-a498-7e2827a7feff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:24.513573   10629 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pcdsj" [5d2439a5-ddb4-443b-babd-bf95b689ccb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 08:30:24.513579   10629 system_pods.go:89] "storage-provisioner" [c836f22f-4939-46e3-96ff-077c5057f334] Running
	I1124 08:30:24.513586   10629 system_pods.go:126] duration metric: took 19.465708ms to wait for k8s-apps to be running ...
	I1124 08:30:24.513594   10629 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 08:30:24.513636   10629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:30:24.606295   10629 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 08:30:24.606317   10629 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 08:30:24.709796   10629 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 08:30:24.709826   10629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 08:30:24.724701   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:24.724796   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:24.762359   10629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 08:30:24.946202   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:25.217596   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:25.218626   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:25.447542   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:25.701248   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.233621783s)
	I1124 08:30:25.701292   10629 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.187632686s)
	I1124 08:30:25.701328   10629 system_svc.go:56] duration metric: took 1.187729405s WaitForService to wait for kubelet
	I1124 08:30:25.701339   10629 kubeadm.go:587] duration metric: took 11.437550143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 08:30:25.701360   10629 node_conditions.go:102] verifying NodePressure condition ...
	I1124 08:30:25.711378   10629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 08:30:25.711408   10629 node_conditions.go:123] node cpu capacity is 2
	I1124 08:30:25.711427   10629 node_conditions.go:105] duration metric: took 10.061807ms to run NodePressure ...
	I1124 08:30:25.711441   10629 start.go:242] waiting for startup goroutines ...
	I1124 08:30:25.719894   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:25.719896   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:25.951596   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:26.265850   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:26.265954   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:26.303948   10629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.541541973s)
	I1124 08:30:26.305084   10629 addons.go:495] Verifying addon gcp-auth=true in "addons-076740"
	I1124 08:30:26.306794   10629 out.go:179] * Verifying gcp-auth addon...
	I1124 08:30:26.308343   10629 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 08:30:26.349558   10629 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 08:30:26.349586   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:26.448358   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:26.725837   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:26.728512   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:26.827197   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:26.952186   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:27.231907   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:27.231911   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:27.326799   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:27.444348   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:27.717856   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:27.718239   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:27.812969   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:27.944738   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:28.215449   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:28.216375   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:28.317775   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:28.444045   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:28.715368   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:28.717390   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:28.815124   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:28.944835   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:29.217792   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:29.219419   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:29.313001   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:29.447206   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:29.718092   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:29.721845   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:29.813711   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:29.946439   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:30.217605   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:30.222864   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:30.318802   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:30.446905   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:30.722121   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:30.722665   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:30.814540   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:30.945396   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:31.220799   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:31.226420   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:31.311487   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:31.448872   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:31.717337   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:31.717853   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:31.816300   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:31.945118   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:32.215554   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:32.216980   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:32.312085   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:32.445005   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:32.719843   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:32.719888   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:32.818217   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:32.943925   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:33.216176   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:33.216408   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:33.311880   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:33.445596   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:33.718245   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:33.719966   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:33.812703   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:33.945035   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:34.218632   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:34.218862   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:34.315053   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:34.446574   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:34.717297   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:34.718388   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:34.815100   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:34.945519   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:35.218867   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:35.220959   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:35.314468   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:35.444627   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:35.716153   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:35.724971   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:35.815247   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:35.944209   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:36.219009   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:36.223198   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:36.312478   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:36.605664   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:36.721085   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:36.721245   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:36.817705   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:36.945972   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:37.217791   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:37.218616   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:37.313941   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:37.445108   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:37.716878   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:37.717279   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:37.817115   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:37.943842   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:38.216010   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:38.216182   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:38.312489   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:38.445581   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:38.715499   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:38.717393   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:38.816598   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:38.943940   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:39.216985   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:39.217101   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:39.313186   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:39.444955   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:39.719870   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:39.724324   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:39.815686   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:39.944086   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:40.215724   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:40.217454   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:40.313719   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:40.446866   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:40.720402   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:40.720770   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:40.813001   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:40.945181   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:41.220271   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:41.226296   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:41.314634   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:41.444488   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:41.719995   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:41.722789   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:41.813394   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:41.946199   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:42.217407   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:42.218451   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:42.311467   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:42.444659   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:42.715844   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:42.718536   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:42.815192   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:42.948137   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:43.220015   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:43.222217   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:43.468695   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:43.468919   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:43.714854   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:43.718938   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:43.812828   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:43.946705   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:44.214906   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:44.216799   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:44.313869   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:44.445407   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:44.716406   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:44.718251   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:44.814594   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:44.945412   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:45.222331   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:45.222585   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:45.313047   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:45.447368   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:45.718686   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:45.720827   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:45.817338   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:45.948580   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:46.221812   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:46.222546   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:46.312838   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:46.444709   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.072978   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:47.073039   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.073098   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.073558   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.217077   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:47.219009   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.316804   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.444271   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:47.720362   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:47.720711   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:47.820262   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:47.943892   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:48.216472   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:48.217729   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:48.311639   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:48.447934   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:48.716839   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:48.716886   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:48.811984   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:48.944597   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:49.218290   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:49.218434   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:49.312306   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:49.443763   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:49.717096   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:49.719780   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:49.814376   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:49.944182   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:50.215343   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:50.217335   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:50.314006   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:50.447074   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:50.717055   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:50.719847   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:50.813138   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:50.945267   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:51.216150   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:51.216258   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:51.312822   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:51.457010   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:51.716401   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:51.717527   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:51.814731   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:51.947916   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:52.219000   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:52.221827   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:52.311902   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:52.445052   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:52.716874   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:52.717521   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:52.811622   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:52.945496   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:53.294715   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:53.294780   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:53.314583   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:53.443917   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:53.717625   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:53.717639   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:53.812421   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:53.944546   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:54.215250   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:54.218510   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:54.311878   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:54.446695   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:54.716670   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:54.717653   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:54.811725   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:54.945141   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:55.218351   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:55.218438   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:55.311366   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:55.444958   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:55.719302   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:55.719351   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:55.812231   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:55.948042   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:56.217029   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:56.219017   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:56.312724   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:56.445287   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:56.718088   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:56.721807   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:56.811963   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:56.948097   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:57.216643   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:57.218279   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:57.317104   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:57.444275   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:57.717810   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:57.720178   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:58.067773   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:58.068934   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:58.219055   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:58.219400   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:58.313215   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:58.446730   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:58.715933   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:58.717872   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:58.815337   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:58.944765   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:59.215813   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 08:30:59.216397   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:59.311662   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:59.444690   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:30:59.716588   10629 kapi.go:107] duration metric: took 37.005176936s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 08:30:59.716851   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:30:59.811951   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:30:59.945044   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:00.217240   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:00.313395   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:00.444670   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:00.716251   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:00.817604   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:00.947211   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:01.218347   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:01.315864   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:01.445351   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:01.720142   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:01.812562   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:01.947304   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:02.219820   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:02.313120   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:02.445028   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:02.717877   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:02.813623   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:02.947079   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:03.384834   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:03.506478   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:03.508305   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:03.717880   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:03.813632   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:03.945470   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:04.219072   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:04.318903   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:04.444643   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:04.718945   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:04.813249   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:04.945742   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:05.217654   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:05.312274   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:05.449551   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:05.717673   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:05.812264   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:05.949547   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:06.218554   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:06.311932   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:06.444718   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:06.716825   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:06.811964   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:06.944746   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:07.216980   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:07.312739   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:07.445683   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:07.717240   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:07.812716   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:07.947674   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:08.219677   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:08.312327   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:08.450803   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:08.716092   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:08.812649   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:08.944881   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:09.216704   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:09.312208   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:09.444406   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:09.719357   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:09.812773   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:09.944822   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:10.217274   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:10.313568   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:10.444954   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:10.720169   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:10.812153   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:10.945759   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:11.221520   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:11.315610   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:11.453776   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:11.721387   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:11.813508   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:11.944297   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:12.217809   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:12.312937   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:12.449390   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:12.717311   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:12.818100   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:12.944533   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:13.217511   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:13.311217   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:13.443992   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:13.716747   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:13.818186   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:13.950129   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:14.216812   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:14.312940   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:14.446207   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:14.718401   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:14.824342   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:14.943478   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:15.216776   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:15.312822   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:15.445553   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:15.718971   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:15.814908   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:15.947442   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:16.218592   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:16.311546   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:16.444736   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:16.722537   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:16.812651   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:16.945869   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:17.218963   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:17.318196   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:17.447112   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:17.722729   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:17.822961   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:17.949992   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:18.217208   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:18.312176   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:18.448194   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:18.719021   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:18.814511   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:18.945469   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:19.217092   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:19.313314   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:19.445316   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:19.717647   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:19.818497   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:19.945458   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:20.219027   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:20.312626   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:20.446722   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:20.718213   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:20.812235   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:20.955000   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:21.219083   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:21.317360   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:21.444352   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:21.717000   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:21.812388   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:21.950189   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:22.218428   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:22.312330   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:22.445721   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:22.716438   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:22.815598   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:23.026311   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:23.223114   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:23.313069   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:23.445273   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:23.716918   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:23.812538   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:23.945767   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:24.222891   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:24.312975   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:24.448770   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:24.717599   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:24.811732   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:24.950109   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:25.217369   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:25.312414   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:25.449424   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:25.720690   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:25.818771   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:25.955804   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:26.219291   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:26.313134   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:26.443817   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:26.716008   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:26.815339   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:26.945825   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:27.218339   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:27.317953   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:27.446929   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:27.717663   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:27.819712   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:27.947739   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:28.217305   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:28.318228   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:28.444641   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:28.719598   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:28.812848   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:28.947013   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:29.217449   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:29.313452   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:29.448182   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:29.719057   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:29.812685   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:29.947877   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:30.216215   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:30.312658   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:30.447961   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:30.717722   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:30.815540   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:30.946261   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:31.223409   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:31.322861   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:31.447558   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:31.720216   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:31.814764   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:32.015221   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:32.219959   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:32.312081   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:32.464944   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:32.716859   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:32.812499   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:32.944527   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:33.217491   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:33.314056   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:33.447483   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:33.718277   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:33.813253   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:33.945087   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:34.223336   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:34.313585   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:34.447518   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:34.717029   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:34.812919   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:34.945827   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:35.220644   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:35.313306   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:35.444356   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:35.717793   10629 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 08:31:35.814872   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:35.944950   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:36.216898   10629 kapi.go:107] duration metric: took 1m13.504205686s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 08:31:36.316846   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:36.449732   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:36.812795   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:36.944384   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:37.318904   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:37.446303   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:37.813475   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:37.944091   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 08:31:38.312108   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:38.443958   10629 kapi.go:107] duration metric: took 1m14.003822847s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 08:31:38.812417   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:39.311793   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:39.812774   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:40.313031   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:40.920187   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:41.314845   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:41.814956   10629 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 08:31:42.311819   10629 kapi.go:107] duration metric: took 1m16.003473501s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 08:31:42.313766   10629 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-076740 cluster.
	I1124 08:31:42.315402   10629 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 08:31:42.316634   10629 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 08:31:42.317997   10629 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, default-storageclass, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1124 08:31:42.319152   10629 addons.go:530] duration metric: took 1m28.055336571s for enable addons: enabled=[amd-gpu-device-plugin registry-creds default-storageclass cloud-spanner inspektor-gadget ingress-dns storage-provisioner nvidia-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1124 08:31:42.319222   10629 start.go:247] waiting for cluster config update ...
	I1124 08:31:42.319238   10629 start.go:256] writing updated cluster config ...
	I1124 08:31:42.319475   10629 ssh_runner.go:195] Run: rm -f paused
	I1124 08:31:42.327635   10629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 08:31:42.330786   10629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hf96x" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.335619   10629 pod_ready.go:94] pod "coredns-66bc5c9577-hf96x" is "Ready"
	I1124 08:31:42.335638   10629 pod_ready.go:86] duration metric: took 4.83176ms for pod "coredns-66bc5c9577-hf96x" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.338147   10629 pod_ready.go:83] waiting for pod "etcd-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.342628   10629 pod_ready.go:94] pod "etcd-addons-076740" is "Ready"
	I1124 08:31:42.342654   10629 pod_ready.go:86] duration metric: took 4.471034ms for pod "etcd-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.344377   10629 pod_ready.go:83] waiting for pod "kube-apiserver-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.348730   10629 pod_ready.go:94] pod "kube-apiserver-addons-076740" is "Ready"
	I1124 08:31:42.348754   10629 pod_ready.go:86] duration metric: took 4.360016ms for pod "kube-apiserver-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.350559   10629 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.732807   10629 pod_ready.go:94] pod "kube-controller-manager-addons-076740" is "Ready"
	I1124 08:31:42.732833   10629 pod_ready.go:86] duration metric: took 382.257941ms for pod "kube-controller-manager-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:42.932550   10629 pod_ready.go:83] waiting for pod "kube-proxy-xp75h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:43.332638   10629 pod_ready.go:94] pod "kube-proxy-xp75h" is "Ready"
	I1124 08:31:43.332663   10629 pod_ready.go:86] duration metric: took 400.089189ms for pod "kube-proxy-xp75h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:43.532009   10629 pod_ready.go:83] waiting for pod "kube-scheduler-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:43.932365   10629 pod_ready.go:94] pod "kube-scheduler-addons-076740" is "Ready"
	I1124 08:31:43.932390   10629 pod_ready.go:86] duration metric: took 400.360014ms for pod "kube-scheduler-addons-076740" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 08:31:43.932401   10629 pod_ready.go:40] duration metric: took 1.604721593s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 08:31:43.977832   10629 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 08:31:43.980080   10629 out.go:179] * Done! kubectl is now configured to use "addons-076740" cluster and "default" namespace by default
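	
	The long run of kapi.go:96 lines above comes from minikube polling each addon's pods by label selector until they stop reporting Pending and become Ready; the kapi.go:107 lines then record the total wait per selector. Below is a minimal client-go sketch of that kind of label-based readiness wait. It is an illustrative approximation only, not minikube's actual kapi implementation: the kubeconfig loading, the 500ms poll interval, and the 4-minute timeout are assumptions.

	// waitready.go: sketch of a label-selector readiness wait (assumptions noted above).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForLabel polls until every pod matching selector in ns is Ready.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat transient API errors as "keep polling"
				}
				if len(pods.Items) == 0 {
					fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
					return false, nil
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
			panic(err)
		}
		fmt.Println("all matching pods are Ready")
	}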
	
	
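	The gcp-auth note above says per-pod credential mounting can be skipped by adding a label with the `gcp-auth-skip-secret` key to the pod configuration. A minimal client-go sketch of creating such a pod follows; the pod name, namespace, image, and the label value "true" are assumptions for illustration, since the message only specifies the key.

	// skiplabel.go: sketch of a pod labeled to opt out of gcp-auth credential mounting.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					// Key taken from the gcp-auth message above; the value is an assumption.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod", created.Name, "with the gcp-auth skip label")
	}
	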
	==> CRI-O <==
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.212329439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69e3565d-b0fc-4fa1-a370-f9009b987bc9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.212393445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69e3565d-b0fc-4fa1-a370-f9009b987bc9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.212822512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504cf303ff3c2850ce573c63bd134b20b89c04e7bd82eb18f03f0f1c37c710db,PodSandboxId:cf18bb8697317204f0a5cff03cc55017659ccd08df3e408fca9ba0c7e3d56c4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763973147149348592,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03578caf-2a1a-4d02-b25e-2d01e414376a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767e499ace381579a50a4323909e991fb761545f2885b60cf377ac80b8d613f4,PodSandboxId:054c1b2e19fd439b327b3c9f92c0fb113f2adabebb60f49943de6585ba5b37ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763973108643905606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deb25b0-61ba-41f6-85a6-166e22652eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbaabc1d869fc31feae4d8ae4bc3954f605db451ded6d28afd54a19faf66888,PodSandboxId:471a29a4f6ca2f0f96b1a6ed372f8e175055bd6ae7fb084c5e439ca60c959fd0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763973095754577122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cc4l7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aa9ae47b-cc47-4d14-9994-f4a2d709171d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7d329686aa2402a9bdd1e0a9fd84a436f69f53c31fb106fe6076169dd98e10e8,PodSandboxId:323cfec09e9d73e803d642a9cec8ba27e4a884edab58ccc5f3df4a615c1f88d5,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763973081893599793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mxhcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ff207f5-08cc-47c3-83da-0a579a52b5f1,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8541ccbd7652fb3c11d949bfff77a1fe1c68744432f740ca24f212b3d153814a,PodSandboxId:58c82622e59631fe1813073781c9476e9993403addb0b15f4b034d4fc950676f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763973080981072942,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqgnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e63bd80d-c1b2-4c74-b16f-5f323261c227,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240b2bb34d832dbe15bb3fe47a865391e145d62deeecda3a59d67eaeeda5de38,PodSandboxId:2cadf4fa01b6bbdd7e3a33d4a1f82647943ef44d9b42eb7faac488e893edc8fc,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763973047194645958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ed36c-6854-4cad-96d8-b7bf66c40615,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8ff532dd2d4a9892a4a6d46c207e7dcaa397bd5cf07e95f0e2d852df570858,PodSandboxId:cdb3f16842cd11d40d99818269d3bf5888aebea83732bab8e043009caf08a9cf,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763973024663514241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-d9h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 531de084-465c-4122-81af-f1cb7db6f953,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b1908651908d45695d983206417d08987e5de94eac693df60dea52f324fe4c,PodSandboxId:e960954dfdb56d480d7edb0f51b75f5a286ed8fc795bfa40d9abc0c966ef3981,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973023479357819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c836f22f-4939-46e3-96ff-077c5057f334,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426d136def23fe866f45eccb2d1050f30b0a8a23fcbed1efd51b6eb525868124,PodSandboxId:75854f48cc3574711f97676a6d2636b959aefd1f05640b8575386c398f9abb72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763973015462860406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hf96x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb81f21-11fb-43aa-aed2-d7ec1a7bf527,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c76d520a5b824fadf127aae493aef48c565adc904459a93ae052b5d1c4dfa3a,PodSandboxId:a6d6c0388922714e4b58a50ca3061999e943a638c054636d89fedf805dffee68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763973014516949241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xp75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99affeb8-7846-448d-af12-b3b421836507,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de6ad52fdbb0f95442d74a0eccae6718cfb298393ff8f49745ae08a4b289360,PodSandboxId:e76827870eb2ff57f32a512d569180c27a9bef71d8924ee9f043086a8e00ee35,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973002891508022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 569fa76a278a5d85c6c5941ee6385d54,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:678a57b8158c6342bc26c15be840e9e073f489d7b6656d3892c0b0c5c74e97ef,PodSandboxId:a05fec874f30225a45671b0cc9b8bbf317c0ceece67b39f16684fcae01f25833,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763973002871616558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c57db6fe3c0b839843ee9fa59108b3,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d76c01064fe818726a0b040dba703994b78bf454cac7a7ee1266fa7f71fc7f,PodSandboxId:3c8430f520bd2c1c30fe129601c80e4092158ba5ec04cd69b5bbffe93cb0884e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763973002859032977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365664d2bf7b2601524513eeecbefc4e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d1e5867eb1cbe8ba573016818eb267dd05a2f43e16e6ec93bb8d186f3ae517a,PodSandboxId:7231e77734ecd0f169772f88e469f275b81805d37964ca7eee123fa3e56d9719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763973002841131385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-076740,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b8f5581ebce32449f72fd687f257e517,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69e3565d-b0fc-4fa1-a370-f9009b987bc9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.239837612Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.246623458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=382869e1-71e8-437b-bd1a-d25bad0159ad name=/runtime.v1.RuntimeService/Version
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.246709684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=382869e1-71e8-437b-bd1a-d25bad0159ad name=/runtime.v1.RuntimeService/Version
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.248430218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bca0bd17-9db5-4603-82df-ee7fdc66b41b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.249807115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763973288249757925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585496,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bca0bd17-9db5-4603-82df-ee7fdc66b41b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.254305982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36481928-ae89-4f64-a62d-078fc92b4fac name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.254523973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36481928-ae89-4f64-a62d-078fc92b4fac name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.255639486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504cf303ff3c2850ce573c63bd134b20b89c04e7bd82eb18f03f0f1c37c710db,PodSandboxId:cf18bb8697317204f0a5cff03cc55017659ccd08df3e408fca9ba0c7e3d56c4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763973147149348592,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03578caf-2a1a-4d02-b25e-2d01e414376a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767e499ace381579a50a4323909e991fb761545f2885b60cf377ac80b8d613f4,PodSandboxId:054c1b2e19fd439b327b3c9f92c0fb113f2adabebb60f49943de6585ba5b37ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763973108643905606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deb25b0-61ba-41f6-85a6-166e22652eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbaabc1d869fc31feae4d8ae4bc3954f605db451ded6d28afd54a19faf66888,PodSandboxId:471a29a4f6ca2f0f96b1a6ed372f8e175055bd6ae7fb084c5e439ca60c959fd0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763973095754577122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cc4l7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aa9ae47b-cc47-4d14-9994-f4a2d709171d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7d329686aa2402a9bdd1e0a9fd84a436f69f53c31fb106fe6076169dd98e10e8,PodSandboxId:323cfec09e9d73e803d642a9cec8ba27e4a884edab58ccc5f3df4a615c1f88d5,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763973081893599793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mxhcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ff207f5-08cc-47c3-83da-0a579a52b5f1,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8541ccbd7652fb3c11d949bfff77a1fe1c68744432f740ca24f212b3d153814a,PodSandboxId:58c82622e59631fe1813073781c9476e9993403addb0b15f4b034d4fc950676f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763973080981072942,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqgnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e63bd80d-c1b2-4c74-b16f-5f323261c227,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240b2bb34d832dbe15bb3fe47a865391e145d62deeecda3a59d67eaeeda5de38,PodSandboxId:2cadf4fa01b6bbdd7e3a33d4a1f82647943ef44d9b42eb7faac488e893edc8fc,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763973047194645958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ed36c-6854-4cad-96d8-b7bf66c40615,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8ff532dd2d4a9892a4a6d46c207e7dcaa397bd5cf07e95f0e2d852df570858,PodSandboxId:cdb3f16842cd11d40d99818269d3bf5888aebea83732bab8e043009caf08a9cf,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763973024663514241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-d9h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 531de084-465c-4122-81af-f1cb7db6f953,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b1908651908d45695d983206417d08987e5de94eac693df60dea52f324fe4c,PodSandboxId:e960954dfdb56d480d7edb0f51b75f5a286ed8fc795bfa40d9abc0c966ef3981,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973023479357819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c836f22f-4939-46e3-96ff-077c5057f334,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426d136def23fe866f45eccb2d1050f30b0a8a23fcbed1efd51b6eb525868124,PodSandboxId:75854f48cc3574711f97676a6d2636b959aefd1f05640b8575386c398f9abb72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763973015462860406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hf96x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb81f21-11fb-43aa-aed2-d7ec1a7bf527,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c76d520a5b824fadf127aae493aef48c565adc904459a93ae052b5d1c4dfa3a,PodSandboxId:a6d6c0388922714e4b58a50ca3061999e943a638c054636d89fedf805dffee68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763973014516949241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xp75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99affeb8-7846-448d-af12-b3b421836507,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de6ad52fdbb0f95442d74a0eccae6718cfb298393ff8f49745ae08a4b289360,PodSandboxId:e76827870eb2ff57f32a512d569180c27a9bef71d8924ee9f043086a8e00ee35,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973002891508022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 569fa76a278a5d85c6c5941ee6385d54,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:678a57b8158c6342bc26c15be840e9e073f489d7b6656d3892c0b0c5c74e97ef,PodSandboxId:a05fec874f30225a45671b0cc9b8bbf317c0ceece67b39f16684fcae01f25833,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763973002871616558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c57db6fe3c0b839843ee9fa59108b3,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d76c01064fe818726a0b040dba703994b78bf454cac7a7ee1266fa7f71fc7f,PodSandboxId:3c8430f520bd2c1c30fe129601c80e4092158ba5ec04cd69b5bbffe93cb0884e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763973002859032977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365664d2bf7b2601524513eeecbefc4e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d1e5867eb1cbe8ba573016818eb267dd05a2f43e16e6ec93bb8d186f3ae517a,PodSandboxId:7231e77734ecd0f169772f88e469f275b81805d37964ca7eee123fa3e56d9719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763973002841131385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-076740,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b8f5581ebce32449f72fd687f257e517,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36481928-ae89-4f64-a62d-078fc92b4fac name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.287472929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47822726-3982-4261-9d9d-cc356d494618 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.287554315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47822726-3982-4261-9d9d-cc356d494618 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.289864104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c49ac7a-f71f-4493-911d-baa3e28179e9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.292235814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763973288292162662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585496,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c49ac7a-f71f-4493-911d-baa3e28179e9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.293189099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51731748-fde9-4c38-8bfb-25a5f5936586 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.293268745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51731748-fde9-4c38-8bfb-25a5f5936586 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.294356207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504cf303ff3c2850ce573c63bd134b20b89c04e7bd82eb18f03f0f1c37c710db,PodSandboxId:cf18bb8697317204f0a5cff03cc55017659ccd08df3e408fca9ba0c7e3d56c4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763973147149348592,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03578caf-2a1a-4d02-b25e-2d01e414376a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767e499ace381579a50a4323909e991fb761545f2885b60cf377ac80b8d613f4,PodSandboxId:054c1b2e19fd439b327b3c9f92c0fb113f2adabebb60f49943de6585ba5b37ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763973108643905606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deb25b0-61ba-41f6-85a6-166e22652eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbaabc1d869fc31feae4d8ae4bc3954f605db451ded6d28afd54a19faf66888,PodSandboxId:471a29a4f6ca2f0f96b1a6ed372f8e175055bd6ae7fb084c5e439ca60c959fd0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763973095754577122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cc4l7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aa9ae47b-cc47-4d14-9994-f4a2d709171d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7d329686aa2402a9bdd1e0a9fd84a436f69f53c31fb106fe6076169dd98e10e8,PodSandboxId:323cfec09e9d73e803d642a9cec8ba27e4a884edab58ccc5f3df4a615c1f88d5,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763973081893599793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mxhcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ff207f5-08cc-47c3-83da-0a579a52b5f1,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8541ccbd7652fb3c11d949bfff77a1fe1c68744432f740ca24f212b3d153814a,PodSandboxId:58c82622e59631fe1813073781c9476e9993403addb0b15f4b034d4fc950676f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763973080981072942,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqgnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e63bd80d-c1b2-4c74-b16f-5f323261c227,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240b2bb34d832dbe15bb3fe47a865391e145d62deeecda3a59d67eaeeda5de38,PodSandboxId:2cadf4fa01b6bbdd7e3a33d4a1f82647943ef44d9b42eb7faac488e893edc8fc,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763973047194645958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ed36c-6854-4cad-96d8-b7bf66c40615,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8ff532dd2d4a9892a4a6d46c207e7dcaa397bd5cf07e95f0e2d852df570858,PodSandboxId:cdb3f16842cd11d40d99818269d3bf5888aebea83732bab8e043009caf08a9cf,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763973024663514241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-d9h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 531de084-465c-4122-81af-f1cb7db6f953,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b1908651908d45695d983206417d08987e5de94eac693df60dea52f324fe4c,PodSandboxId:e960954dfdb56d480d7edb0f51b75f5a286ed8fc795bfa40d9abc0c966ef3981,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973023479357819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c836f22f-4939-46e3-96ff-077c5057f334,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426d136def23fe866f45eccb2d1050f30b0a8a23fcbed1efd51b6eb525868124,PodSandboxId:75854f48cc3574711f97676a6d2636b959aefd1f05640b8575386c398f9abb72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763973015462860406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hf96x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb81f21-11fb-43aa-aed2-d7ec1a7bf527,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c76d520a5b824fadf127aae493aef48c565adc904459a93ae052b5d1c4dfa3a,PodSandboxId:a6d6c0388922714e4b58a50ca3061999e943a638c054636d89fedf805dffee68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763973014516949241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xp75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99affeb8-7846-448d-af12-b3b421836507,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de6ad52fdbb0f95442d74a0eccae6718cfb298393ff8f49745ae08a4b289360,PodSandboxId:e76827870eb2ff57f32a512d569180c27a9bef71d8924ee9f043086a8e00ee35,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973002891508022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 569fa76a278a5d85c6c5941ee6385d54,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:678a57b8158c6342bc26c15be840e9e073f489d7b6656d3892c0b0c5c74e97ef,PodSandboxId:a05fec874f30225a45671b0cc9b8bbf317c0ceece67b39f16684fcae01f25833,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763973002871616558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c57db6fe3c0b839843ee9fa59108b3,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d76c01064fe818726a0b040dba703994b78bf454cac7a7ee1266fa7f71fc7f,PodSandboxId:3c8430f520bd2c1c30fe129601c80e4092158ba5ec04cd69b5bbffe93cb0884e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763973002859032977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365664d2bf7b2601524513eeecbefc4e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d1e5867eb1cbe8ba573016818eb267dd05a2f43e16e6ec93bb8d186f3ae517a,PodSandboxId:7231e77734ecd0f169772f88e469f275b81805d37964ca7eee123fa3e56d9719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763973002841131385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-076740,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b8f5581ebce32449f72fd687f257e517,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51731748-fde9-4c38-8bfb-25a5f5936586 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.325806317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d44c058-ac95-44d7-9d74-4724c988c33d name=/runtime.v1.RuntimeService/Version
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.325899453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d44c058-ac95-44d7-9d74-4724c988c33d name=/runtime.v1.RuntimeService/Version
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.327253042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53afb1bb-0537-4183-ad2a-8adecf40c3e7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.330017224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763973288329937106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585496,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53afb1bb-0537-4183-ad2a-8adecf40c3e7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.331097342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb865e16-e117-404a-b21a-10ac63368c14 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.331177399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb865e16-e117-404a-b21a-10ac63368c14 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:34:48 addons-076740 crio[805]: time="2025-11-24 08:34:48.331888604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504cf303ff3c2850ce573c63bd134b20b89c04e7bd82eb18f03f0f1c37c710db,PodSandboxId:cf18bb8697317204f0a5cff03cc55017659ccd08df3e408fca9ba0c7e3d56c4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763973147149348592,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03578caf-2a1a-4d02-b25e-2d01e414376a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767e499ace381579a50a4323909e991fb761545f2885b60cf377ac80b8d613f4,PodSandboxId:054c1b2e19fd439b327b3c9f92c0fb113f2adabebb60f49943de6585ba5b37ba,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763973108643905606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deb25b0-61ba-41f6-85a6-166e22652eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbaabc1d869fc31feae4d8ae4bc3954f605db451ded6d28afd54a19faf66888,PodSandboxId:471a29a4f6ca2f0f96b1a6ed372f8e175055bd6ae7fb084c5e439ca60c959fd0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763973095754577122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cc4l7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aa9ae47b-cc47-4d14-9994-f4a2d709171d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7d329686aa2402a9bdd1e0a9fd84a436f69f53c31fb106fe6076169dd98e10e8,PodSandboxId:323cfec09e9d73e803d642a9cec8ba27e4a884edab58ccc5f3df4a615c1f88d5,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763973081893599793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mxhcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ff207f5-08cc-47c3-83da-0a579a52b5f1,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8541ccbd7652fb3c11d949bfff77a1fe1c68744432f740ca24f212b3d153814a,PodSandboxId:58c82622e59631fe1813073781c9476e9993403addb0b15f4b034d4fc950676f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763973080981072942,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqgnj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e63bd80d-c1b2-4c74-b16f-5f323261c227,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240b2bb34d832dbe15bb3fe47a865391e145d62deeecda3a59d67eaeeda5de38,PodSandboxId:2cadf4fa01b6bbdd7e3a33d4a1f82647943ef44d9b42eb7faac488e893edc8fc,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763973047194645958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ed36c-6854-4cad-96d8-b7bf66c40615,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8ff532dd2d4a9892a4a6d46c207e7dcaa397bd5cf07e95f0e2d852df570858,PodSandboxId:cdb3f16842cd11d40d99818269d3bf5888aebea83732bab8e043009caf08a9cf,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763973024663514241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-d9h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 531de084-465c-4122-81af-f1cb7db6f953,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b1908651908d45695d983206417d08987e5de94eac693df60dea52f324fe4c,PodSandboxId:e960954dfdb56d480d7edb0f51b75f5a286ed8fc795bfa40d9abc0c966ef3981,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973023479357819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c836f22f-4939-46e3-96ff-077c5057f334,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426d136def23fe866f45eccb2d1050f30b0a8a23fcbed1efd51b6eb525868124,PodSandboxId:75854f48cc3574711f97676a6d2636b959aefd1f05640b8575386c398f9abb72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763973015462860406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hf96x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb81f21-11fb-43aa-aed2-d7ec1a7bf527,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c76d520a5b824fadf127aae493aef48c565adc904459a93ae052b5d1c4dfa3a,PodSandboxId:a6d6c0388922714e4b58a50ca3061999e943a638c054636d89fedf805dffee68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763973014516949241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xp75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99affeb8-7846-448d-af12-b3b421836507,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de6ad52fdbb0f95442d74a0eccae6718cfb298393ff8f49745ae08a4b289360,PodSandboxId:e76827870eb2ff57f32a512d569180c27a9bef71d8924ee9f043086a8e00ee35,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973002891508022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 569fa76a278a5d85c6c5941ee6385d54,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:678a57b8158c6342bc26c15be840e9e073f489d7b6656d3892c0b0c5c74e97ef,PodSandboxId:a05fec874f30225a45671b0cc9b8bbf317c0ceece67b39f16684fcae01f25833,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763973002871616558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c57db6fe3c0b839843ee9fa59108b3,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d76c01064fe818726a0b040dba703994b78bf454cac7a7ee1266fa7f71fc7f,PodSandboxId:3c8430f520bd2c1c30fe129601c80e4092158ba5ec04cd69b5bbffe93cb0884e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763973002859032977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-076740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365664d2bf7b2601524513eeecbefc4e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d1e5867eb1cbe8ba573016818eb267dd05a2f43e16e6ec93bb8d186f3ae517a,PodSandboxId:7231e77734ecd0f169772f88e469f275b81805d37964ca7eee123fa3e56d9719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763973002841131385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-076740,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b8f5581ebce32449f72fd687f257e517,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb865e16-e117-404a-b21a-10ac63368c14 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	504cf303ff3c2       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   cf18bb8697317       nginx                                      default
	767e499ace381       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   054c1b2e19fd4       busybox                                    default
	cdbaabc1d869f       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   471a29a4f6ca2       ingress-nginx-controller-6c8bf45fb-cc4l7   ingress-nginx
	7d329686aa240       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago       Exited              patch                     1                   323cfec09e9d7       ingress-nginx-admission-patch-mxhcs        ingress-nginx
	8541ccbd7652f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   58c82622e5963       ingress-nginx-admission-create-hqgnj       ingress-nginx
	240b2bb34d832       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   2cadf4fa01b6b       kube-ingress-dns-minikube                  kube-system
	0f8ff532dd2d4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   cdb3f16842cd1       amd-gpu-device-plugin-d9h4q                kube-system
	61b1908651908       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   e960954dfdb56       storage-provisioner                        kube-system
	426d136def23f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   75854f48cc357       coredns-66bc5c9577-hf96x                   kube-system
	9c76d520a5b82       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   a6d6c03889227       kube-proxy-xp75h                           kube-system
	7de6ad52fdbb0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   e76827870eb2f       etcd-addons-076740                         kube-system
	678a57b8158c6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   a05fec874f302       kube-scheduler-addons-076740               kube-system
	75d76c01064fe       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   3c8430f520bd2       kube-apiserver-addons-076740               kube-system
	8d1e5867eb1cb       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   7231e77734ecd       kube-controller-manager-addons-076740      kube-system
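For reference, a container listing equivalent to the ==> container status <== table above can usually be pulled straight from the node's CRI-O runtime while the profile is still running (a sketch, assuming crictl is available inside the minikube VM, as it normally is for the crio container runtime):

	out/minikube-linux-amd64 -p addons-076740 ssh "sudo crictl ps -a"

The IDs, images, and states in that output should line up with the rows shown above.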
	
	
	==> coredns [426d136def23fe866f45eccb2d1050f30b0a8a23fcbed1efd51b6eb525868124] <==
	[INFO] 10.244.0.8:33605 - 5706 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000373903s
	[INFO] 10.244.0.8:33605 - 39294 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000088358s
	[INFO] 10.244.0.8:33605 - 24371 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000142335s
	[INFO] 10.244.0.8:33605 - 15125 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00014836s
	[INFO] 10.244.0.8:33605 - 42798 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000105574s
	[INFO] 10.244.0.8:33605 - 1382 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000108121s
	[INFO] 10.244.0.8:33605 - 3411 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000084923s
	[INFO] 10.244.0.8:52438 - 26033 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000190962s
	[INFO] 10.244.0.8:52438 - 26393 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145704s
	[INFO] 10.244.0.8:36752 - 48972 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083282s
	[INFO] 10.244.0.8:36752 - 48711 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000216437s
	[INFO] 10.244.0.8:59974 - 10187 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000150968s
	[INFO] 10.244.0.8:59974 - 10471 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114716s
	[INFO] 10.244.0.8:50671 - 35182 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012746s
	[INFO] 10.244.0.8:50671 - 34973 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186322s
	[INFO] 10.244.0.23:40880 - 4193 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000501902s
	[INFO] 10.244.0.23:33673 - 8187 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000085801s
	[INFO] 10.244.0.23:57307 - 47554 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144599s
	[INFO] 10.244.0.23:56737 - 47942 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000375809s
	[INFO] 10.244.0.23:49750 - 38768 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088566s
	[INFO] 10.244.0.23:43818 - 13162 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125186s
	[INFO] 10.244.0.23:39690 - 50085 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001554123s
	[INFO] 10.244.0.23:41480 - 26240 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.003818833s
	[INFO] 10.244.0.27:37413 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290401s
	[INFO] 10.244.0.27:53022 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110155s
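The NXDOMAIN chains in the coredns log above are expected: each in-cluster lookup is first expanded through the pod's search domains (Kubernetes sets ndots:5 in pod resolv.conf by default) before the fully qualified name returns NOERROR. This can be checked from inside the cluster (a sketch, assuming the busybox test pod from this run is still present in the default namespace and its image ships the nslookup applet):

	kubectl --context addons-076740 exec busybox -- cat /etc/resolv.conf
	kubectl --context addons-076740 exec busybox -- nslookup registry.kube-system.svc.cluster.local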
	
	
	==> describe nodes <==
	Name:               addons-076740
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-076740
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=addons-076740
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_30_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-076740
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:30:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-076740
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:34:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:33:12 +0000   Mon, 24 Nov 2025 08:30:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:33:12 +0000   Mon, 24 Nov 2025 08:30:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:33:12 +0000   Mon, 24 Nov 2025 08:30:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:33:12 +0000   Mon, 24 Nov 2025 08:30:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    addons-076740
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcb7093292ae4333adfcd5e85155dbf4
	  System UUID:                bcb70932-92ae-4333-adfc-d5e85155dbf4
	  Boot ID:                    7d97926b-ccca-4b79-b57f-c8f7c314371c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-5d498dc89-k5qkw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-cc4l7    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 amd-gpu-device-plugin-d9h4q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-66bc5c9577-hf96x                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m34s
	  kube-system                 etcd-addons-076740                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-076740                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-076740       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-xp75h                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-076740                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m33s  kube-proxy       
	  Normal  Starting                 4m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s  kubelet          Node addons-076740 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s  kubelet          Node addons-076740 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s  kubelet          Node addons-076740 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s  kubelet          Node addons-076740 status is now: NodeReady
	  Normal  RegisteredNode           4m36s  node-controller  Node addons-076740 event: Registered Node addons-076740 in Controller
	
	
	==> dmesg <==
	[  +0.009213] kauditd_printk_skb: 219 callbacks suppressed
	[  +4.982866] kauditd_printk_skb: 499 callbacks suppressed
	[  +5.702169] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.689248] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.318101] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.290670] kauditd_printk_skb: 17 callbacks suppressed
	[Nov24 08:31] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.623568] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.784708] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.774565] kauditd_printk_skb: 174 callbacks suppressed
	[  +0.000057] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.289987] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.101465] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.170560] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.284631] kauditd_printk_skb: 17 callbacks suppressed
	[Nov24 08:32] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.924198] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.544489] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.360434] kauditd_printk_skb: 114 callbacks suppressed
	[  +2.669708] kauditd_printk_skb: 142 callbacks suppressed
	[  +2.691798] kauditd_printk_skb: 140 callbacks suppressed
	[  +0.682244] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.757744] kauditd_printk_skb: 157 callbacks suppressed
	[Nov24 08:33] kauditd_printk_skb: 10 callbacks suppressed
	[Nov24 08:34] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [7de6ad52fdbb0f95442d74a0eccae6718cfb298393ff8f49745ae08a4b289360] <==
	{"level":"info","ts":"2025-11-24T08:31:15.694745Z","caller":"traceutil/trace.go:172","msg":"trace[1936464198] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:1018; }","duration":"215.452471ms","start":"2025-11-24T08:31:15.479287Z","end":"2025-11-24T08:31:15.694739Z","steps":["trace[1936464198] 'agreement among raft nodes before linearized reading'  (duration: 215.333016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:31:15.694829Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.164153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:31:15.694866Z","caller":"traceutil/trace.go:172","msg":"trace[1637194399] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1019; }","duration":"124.206926ms","start":"2025-11-24T08:31:15.570651Z","end":"2025-11-24T08:31:15.694858Z","steps":["trace[1637194399] 'agreement among raft nodes before linearized reading'  (duration: 124.146598ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:31:15.695194Z","caller":"traceutil/trace.go:172","msg":"trace[576028405] transaction","detail":"{read_only:false; response_revision:1019; number_of_response:1; }","duration":"218.588947ms","start":"2025-11-24T08:31:15.476598Z","end":"2025-11-24T08:31:15.695187Z","steps":["trace[576028405] 'process raft request'  (duration: 218.094555ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:31:23.005478Z","caller":"traceutil/trace.go:172","msg":"trace[978978417] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"123.918987ms","start":"2025-11-24T08:31:22.881545Z","end":"2025-11-24T08:31:23.005464Z","steps":["trace[978978417] 'process raft request'  (duration: 123.821176ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:31:31.997259Z","caller":"traceutil/trace.go:172","msg":"trace[59841378] linearizableReadLoop","detail":"{readStateIndex:1167; appliedIndex:1167; }","duration":"129.690774ms","start":"2025-11-24T08:31:31.867551Z","end":"2025-11-24T08:31:31.997242Z","steps":["trace[59841378] 'read index received'  (duration: 129.685875ms)","trace[59841378] 'applied index is now lower than readState.Index'  (duration: 4.133µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T08:31:31.997427Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.878667ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:31:31.997453Z","caller":"traceutil/trace.go:172","msg":"trace[1353081397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1131; }","duration":"129.923685ms","start":"2025-11-24T08:31:31.867523Z","end":"2025-11-24T08:31:31.997446Z","steps":["trace[1353081397] 'agreement among raft nodes before linearized reading'  (duration: 129.826067ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:31:31.998418Z","caller":"traceutil/trace.go:172","msg":"trace[56772045] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"164.332184ms","start":"2025-11-24T08:31:31.834077Z","end":"2025-11-24T08:31:31.998409Z","steps":["trace[56772045] 'process raft request'  (duration: 163.98437ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:31:40.909350Z","caller":"traceutil/trace.go:172","msg":"trace[437060773] linearizableReadLoop","detail":"{readStateIndex:1198; appliedIndex:1198; }","duration":"168.536651ms","start":"2025-11-24T08:31:40.740786Z","end":"2025-11-24T08:31:40.909323Z","steps":["trace[437060773] 'read index received'  (duration: 168.530051ms)","trace[437060773] 'applied index is now lower than readState.Index'  (duration: 5.294µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T08:31:40.909500Z","caller":"traceutil/trace.go:172","msg":"trace[356141298] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"172.225166ms","start":"2025-11-24T08:31:40.737264Z","end":"2025-11-24T08:31:40.909489Z","steps":["trace[356141298] 'process raft request'  (duration: 172.123988ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:31:40.909555Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.737483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-11-24T08:31:40.909575Z","caller":"traceutil/trace.go:172","msg":"trace[612709967] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1160; }","duration":"168.788232ms","start":"2025-11-24T08:31:40.740782Z","end":"2025-11-24T08:31:40.909570Z","steps":["trace[612709967] 'agreement among raft nodes before linearized reading'  (duration: 168.669976ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:31:40.910041Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.520817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:31:40.910137Z","caller":"traceutil/trace.go:172","msg":"trace[266190114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1161; }","duration":"103.686988ms","start":"2025-11-24T08:31:40.806443Z","end":"2025-11-24T08:31:40.910130Z","steps":["trace[266190114] 'agreement among raft nodes before linearized reading'  (duration: 103.498507ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:32:07.369659Z","caller":"traceutil/trace.go:172","msg":"trace[799583894] transaction","detail":"{read_only:false; response_revision:1327; number_of_response:1; }","duration":"120.345739ms","start":"2025-11-24T08:32:07.249291Z","end":"2025-11-24T08:32:07.369637Z","steps":["trace[799583894] 'process raft request'  (duration: 120.116738ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:32:08.422397Z","caller":"traceutil/trace.go:172","msg":"trace[901113619] transaction","detail":"{read_only:false; response_revision:1329; number_of_response:1; }","duration":"132.854057ms","start":"2025-11-24T08:32:08.289521Z","end":"2025-11-24T08:32:08.422375Z","steps":["trace[901113619] 'process raft request'  (duration: 132.7395ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:32:14.741324Z","caller":"traceutil/trace.go:172","msg":"trace[936652525] transaction","detail":"{read_only:false; response_revision:1403; number_of_response:1; }","duration":"187.96227ms","start":"2025-11-24T08:32:14.553347Z","end":"2025-11-24T08:32:14.741310Z","steps":["trace[936652525] 'process raft request'  (duration: 187.590711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:32:16.968326Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.577757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" limit:1 ","response":"range_response_count:1 size:446"}
	{"level":"info","ts":"2025-11-24T08:32:16.968387Z","caller":"traceutil/trace.go:172","msg":"trace[355336924] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:1413; }","duration":"123.657505ms","start":"2025-11-24T08:32:16.844715Z","end":"2025-11-24T08:32:16.968373Z","steps":["trace[355336924] 'agreement among raft nodes before linearized reading'  (duration: 51.518083ms)","trace[355336924] 'range keys from in-memory index tree'  (duration: 71.971215ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T08:32:16.968642Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.240466ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:32:16.968716Z","caller":"traceutil/trace.go:172","msg":"trace[90702091] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1413; }","duration":"101.322798ms","start":"2025-11-24T08:32:16.867385Z","end":"2025-11-24T08:32:16.968708Z","steps":["trace[90702091] 'range keys from in-memory index tree'  (duration: 101.228171ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:32:16.969193Z","caller":"traceutil/trace.go:172","msg":"trace[2071462556] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"186.968973ms","start":"2025-11-24T08:32:16.782214Z","end":"2025-11-24T08:32:16.969183Z","steps":["trace[2071462556] 'process raft request'  (duration: 114.054077ms)","trace[2071462556] 'compare'  (duration: 72.391451ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T08:32:16.970899Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.776819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:32:16.971307Z","caller":"traceutil/trace.go:172","msg":"trace[1921554936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1414; }","duration":"123.185244ms","start":"2025-11-24T08:32:16.848112Z","end":"2025-11-24T08:32:16.971297Z","steps":["trace[1921554936] 'agreement among raft nodes before linearized reading'  (duration: 122.707587ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:34:48 up 5 min,  0 users,  load average: 0.59, 1.32, 0.70
	Linux addons-076740 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [75d76c01064fe818726a0b040dba703994b78bf454cac7a7ee1266fa7f71fc7f] <==
	 > logger="UnhandledError"
	E1124 08:30:55.496831       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.54.96:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.54.96:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.54.96:443: connect: connection refused" logger="UnhandledError"
	E1124 08:30:55.498553       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.54.96:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.54.96:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.54.96:443: connect: connection refused" logger="UnhandledError"
	I1124 08:30:55.551791       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 08:31:54.738364       1 conn.go:339] Error on socket receive: read tcp 192.168.39.17:8443->192.168.39.1:48944: use of closed network connection
	I1124 08:32:04.049746       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.12.141"}
	I1124 08:32:22.267557       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 08:32:22.465424       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.37.193"}
	I1124 08:32:23.956933       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 08:32:38.906734       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 08:32:38.908306       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 08:32:38.946099       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 08:32:38.946181       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 08:32:38.957835       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 08:32:38.957931       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 08:32:38.980712       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 08:32:38.980851       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 08:32:39.007103       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 08:32:39.007605       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1124 08:32:39.958464       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1124 08:32:40.008770       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1124 08:32:40.035876       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1124 08:32:56.516855       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1124 08:33:00.156082       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1124 08:34:47.245364       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.107.96"}
	
	
	==> kube-controller-manager [8d1e5867eb1cbe8ba573016818eb267dd05a2f43e16e6ec93bb8d186f3ae517a] <==
	E1124 08:32:49.743832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:32:54.209068       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:32:54.210169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:32:57.862941       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:32:57.863956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:33:01.193366       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:33:01.194446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:33:16.016782       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:33:16.017796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:33:18.707173       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:33:18.708245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:33:19.569691       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:33:19.570786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:33:45.406460       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:33:45.407753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:33:46.983261       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:33:46.985070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:34:02.315787       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:34:02.316724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:34:38.581783       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:34:38.582695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:34:43.086123       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:34:43.087097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 08:34:44.040723       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 08:34:44.041765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [9c76d520a5b824fadf127aae493aef48c565adc904459a93ae052b5d1c4dfa3a] <==
	I1124 08:30:15.062950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 08:30:15.164845       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 08:30:15.164883       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.17"]
	E1124 08:30:15.165005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:30:15.445540       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 08:30:15.445678       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 08:30:15.445710       1 server_linux.go:132] "Using iptables Proxier"
	I1124 08:30:15.485930       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:30:15.489366       1 server.go:527] "Version info" version="v1.34.2"
	I1124 08:30:15.489398       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:30:15.512596       1 config.go:200] "Starting service config controller"
	I1124 08:30:15.512704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:30:15.512797       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:30:15.512804       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:30:15.512846       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:30:15.512851       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:30:15.515886       1 config.go:309] "Starting node config controller"
	I1124 08:30:15.515916       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:30:15.515924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:30:15.613448       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:30:15.613488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:30:15.613531       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [678a57b8158c6342bc26c15be840e9e073f489d7b6656d3892c0b0c5c74e97ef] <==
	E1124 08:30:05.941438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:30:05.941493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 08:30:05.941506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 08:30:05.941549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 08:30:05.941670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 08:30:05.941748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:30:05.941841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:30:05.941925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 08:30:05.941918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 08:30:05.942203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:30:05.942269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 08:30:06.810312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 08:30:06.817576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 08:30:06.833629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 08:30:06.851120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 08:30:06.861414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 08:30:06.865303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 08:30:06.954181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 08:30:07.150846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 08:30:07.160035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 08:30:07.225922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 08:30:07.227503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 08:30:07.303226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 08:30:07.341571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 08:30:09.133617       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 08:33:15 addons-076740 kubelet[1494]: I1124 08:33:15.637890    1494 scope.go:117] "RemoveContainer" containerID="df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc"
	Nov 24 08:33:15 addons-076740 kubelet[1494]: E1124 08:33:15.638965    1494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc\": container with ID starting with df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc not found: ID does not exist" containerID="df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc"
	Nov 24 08:33:15 addons-076740 kubelet[1494]: I1124 08:33:15.639076    1494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc"} err="failed to get container status \"df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc\": rpc error: code = NotFound desc = could not find container \"df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc\": container with ID starting with df84ea063db508bccaeb5fa24f5d0e55a4c6020ff991447cfa66e312905f8acc not found: ID does not exist"
	Nov 24 08:33:16 addons-076740 kubelet[1494]: I1124 08:33:16.733912    1494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b119a96d-7c24-4537-a0f0-63790879b2e0" path="/var/lib/kubelet/pods/b119a96d-7c24-4537-a0f0-63790879b2e0/volumes"
	Nov 24 08:33:18 addons-076740 kubelet[1494]: E1124 08:33:18.865453    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973198864820543 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:18 addons-076740 kubelet[1494]: E1124 08:33:18.865491    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973198864820543 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:28 addons-076740 kubelet[1494]: E1124 08:33:28.869124    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973208868722053 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:28 addons-076740 kubelet[1494]: E1124 08:33:28.869148    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973208868722053 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:38 addons-076740 kubelet[1494]: E1124 08:33:38.872168    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973218871606314 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:38 addons-076740 kubelet[1494]: E1124 08:33:38.872216    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973218871606314 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:48 addons-076740 kubelet[1494]: E1124 08:33:48.875017    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973228874382394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:48 addons-076740 kubelet[1494]: E1124 08:33:48.875053    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973228874382394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:58 addons-076740 kubelet[1494]: E1124 08:33:58.878393    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973238877835371 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:33:58 addons-076740 kubelet[1494]: E1124 08:33:58.878419    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973238877835371 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:08 addons-076740 kubelet[1494]: E1124 08:34:08.880837    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973248880413466 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:08 addons-076740 kubelet[1494]: E1124 08:34:08.880888    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973248880413466 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:18 addons-076740 kubelet[1494]: E1124 08:34:18.884279    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973258883724269 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:18 addons-076740 kubelet[1494]: E1124 08:34:18.884320    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973258883724269 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:28 addons-076740 kubelet[1494]: E1124 08:34:28.887386    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973268886695424 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:28 addons-076740 kubelet[1494]: E1124 08:34:28.887410    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973268886695424 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:29 addons-076740 kubelet[1494]: I1124 08:34:29.722128    1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:34:33 addons-076740 kubelet[1494]: I1124 08:34:33.721382    1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-d9h4q" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 08:34:38 addons-076740 kubelet[1494]: E1124 08:34:38.889548    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763973278889191975 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:38 addons-076740 kubelet[1494]: E1124 08:34:38.889572    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763973278889191975 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Nov 24 08:34:47 addons-076740 kubelet[1494]: I1124 08:34:47.246017    1494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xjz\" (UniqueName: \"kubernetes.io/projected/448c4253-0335-49c0-a4df-d78fcf0116f8-kube-api-access-48xjz\") pod \"hello-world-app-5d498dc89-k5qkw\" (UID: \"448c4253-0335-49c0-a4df-d78fcf0116f8\") " pod="default/hello-world-app-5d498dc89-k5qkw"
	
	
	==> storage-provisioner [61b1908651908d45695d983206417d08987e5de94eac693df60dea52f324fe4c] <==
	W1124 08:34:23.743351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:25.746950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:25.751910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:27.755562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:27.761102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:29.764455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:29.770929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:31.776275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:31.784097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:33.787683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:33.792935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:35.796932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:35.801700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:37.805256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:37.812301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:39.815663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:39.821556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:41.825636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:41.831754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:43.835837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:43.843524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:45.847358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:45.855357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:47.860496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:34:47.868365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-076740 -n addons-076740
helpers_test.go:269: (dbg) Run:  kubectl --context addons-076740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-k5qkw ingress-nginx-admission-create-hqgnj ingress-nginx-admission-patch-mxhcs
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-076740 describe pod hello-world-app-5d498dc89-k5qkw ingress-nginx-admission-create-hqgnj ingress-nginx-admission-patch-mxhcs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-076740 describe pod hello-world-app-5d498dc89-k5qkw ingress-nginx-admission-create-hqgnj ingress-nginx-admission-patch-mxhcs: exit status 1 (74.984309ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-k5qkw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-076740/192.168.39.17
	Start Time:       Mon, 24 Nov 2025 08:34:47 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48xjz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-48xjz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-k5qkw to addons-076740
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hqgnj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mxhcs" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-076740 describe pod hello-world-app-5d498dc89-k5qkw ingress-nginx-admission-create-hqgnj ingress-nginx-admission-patch-mxhcs: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable ingress-dns --alsologtostderr -v=1: (1.625744467s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable ingress --alsologtostderr -v=1: (7.741427102s)
--- FAIL: TestAddons/parallel/Ingress (156.75s)
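
The post-mortem above shows the ingress-nginx controller pod Running on addons-076740 while the hello-world-app pod was still pulling its image when the logs were captured, so the ingress data path is the first thing worth inspecting on a rerun. A minimal set of follow-up commands against this profile (plain kubectl CLI, not part of the test harness; the deployment name ingress-nginx-controller is inferred from the pod name in the node description above):

	kubectl --context addons-076740 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-076740 get ingress -A
	kubectl --context addons-076740 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50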

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a411fdbe-a695-4b8a-87a7-4e059588cd68] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005440141s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-014740 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-014740 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-014740 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-014740 apply -f testdata/storage-provisioner/pod.yaml
I1124 08:46:06.396332    9629 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [619282a4-15ce-4cc0-b729-c1ded2320f4e] Pending
helpers_test.go:352: "sp-pod" [619282a4-15ce-4cc0-b729-c1ded2320f4e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-014740 -n functional-014740
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-24 08:52:06.62063578 +0000 UTC m=+1406.005983243
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-014740 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-014740 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-014740/192.168.39.85
Start Time:       Mon, 24 Nov 2025 08:46:06 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5k24d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-5k24d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-014740
  Normal   Pulled     5m51s                kubelet            Successfully pulled image "docker.io/nginx" in 7.729s (8.804s including waiting). Image size: 155491845 bytes.
  Warning  Failed     5m34s                kubelet            Error: container create failed: time="2025-11-24T08:46:15Z" level=error msg="runc create failed: unable to start container process: error during container init: exec: \"/docker-entrypoint.sh\": stat /docker-entrypoint.sh: no such file or directory"
  Normal   Pulling    102s (x5 over 6m)    kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     70s (x4 over 4m40s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     70s (x4 over 4m40s)  kubelet            Error: ErrImagePull
  Normal   BackOff    7s (x10 over 4m40s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     7s (x10 over 4m40s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-014740 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-014740 logs sp-pod -n default: exit status 1 (69.351582ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-014740 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
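Note: the events above show two different failures for sp-pod. An early pull succeeded, but the container crashed in runc because /docker-entrypoint.sh was missing; every later attempt hit Docker Hub's unauthenticated pull rate limit (toomanyrequests), leaving the pod in ImagePullBackOff until the 6m0s timeout expired. A minimal workaround sketch for the rate-limit part, not part of the test harness, assuming the functional-014740 profile is still up; the explicit nginx:latest tag and the delete/re-apply step are assumptions, since the manifest only references docker.io/nginx:

    # Pull once on the host (using the host's Docker Hub quota), then side-load the image
    # into the cluster so the kubelet does not need to reach Docker Hub again.
    docker pull docker.io/nginx:latest
    out/minikube-linux-amd64 -p functional-014740 image load --daemon docker.io/nginx:latest
    # Re-create the test pod; this only avoids the pull if the manifest's imagePullPolicy
    # allows using the cached image (pod.yaml's contents are not shown in this log).
    kubectl --context functional-014740 delete pod sp-pod
    kubectl --context functional-014740 apply -f testdata/storage-provisioner/pod.yaml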
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-014740 -n functional-014740
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 logs -n 25: (1.348881793s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-014740 ssh sudo cat /etc/ssl/certs/96292.pem                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh sudo cat /usr/share/ca-certificates/96292.pem                                                                                          │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh sudo cat /etc/test/nested/copy/9629/hosts                                                                                              │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr                                                                │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr                                                                │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image save kicbase/echo-server:functional-014740 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image rm kicbase/echo-server:functional-014740 --alsologtostderr                                                                           │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image save --daemon kicbase/echo-server:functional-014740 --alsologtostderr                                                                │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ update-context │ functional-014740 update-context --alsologtostderr -v=2                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ update-context │ functional-014740 update-context --alsologtostderr -v=2                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ update-context │ functional-014740 update-context --alsologtostderr -v=2                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format short --alsologtostderr                                                                                                  │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format yaml --alsologtostderr                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh pgrep buildkitd                                                                                                                        │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │                     │
	│ image          │ functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr                                                       │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format json --alsologtostderr                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format table --alsologtostderr                                                                                                  │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:46:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:46:11.714620   19393 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:46:11.714773   19393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.714785   19393 out.go:374] Setting ErrFile to fd 2...
	I1124 08:46:11.714792   19393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.715248   19393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:46:11.715862   19393 out.go:368] Setting JSON to false
	I1124 08:46:11.717025   19393 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1708,"bootTime":1763972264,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:46:11.717101   19393 start.go:143] virtualization: kvm guest
	I1124 08:46:11.719049   19393 out.go:179] * [functional-014740] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:46:11.720406   19393 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:46:11.720406   19393 notify.go:221] Checking for updates...
	I1124 08:46:11.721700   19393 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:46:11.723058   19393 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:46:11.724430   19393 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:46:11.725668   19393 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:46:11.727020   19393 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:46:11.728756   19393 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:46:11.729435   19393 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:46:11.761441   19393 out.go:179] * Using the kvm2 driver based on the existing profile
	I1124 08:46:11.762608   19393 start.go:309] selected driver: kvm2
	I1124 08:46:11.762626   19393 start.go:927] validating driver "kvm2" against &{Name:functional-014740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-014740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:46:11.762766   19393 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:46:11.764721   19393 out.go:203] 
	W1124 08:46:11.766067   19393 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:46:11.767221   19393 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.357878533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763974327357852150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:227769,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4e60f4c-7cd9-4146-86ce-fb20f1fe8ffb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.358864726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d34f613-7ef4-4c2b-be85-e798072c6d78 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.358934794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d34f613-7ef4-4c2b-be85-e798072c6d78 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.359400514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827,PodSandboxId:a8d6edf3bbc03d000600a1804bb0d5929a88da6611e76e3fea53c633de236f30,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1763973989470731217,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-sqj9h,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1586397e-e02d-491c-9634-72d82040c794,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports
: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95,PodSandboxId:7096c404881589ea1ac75170031d87ef95a7753e66d7967a2636c175cb1123db,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763973968010381256,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58be346b-a3c6-494c-864a-b5b43f398892,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23108845e5d9b86415fd3bd6e6b6c033662a26dd2d57c085ae7744c1eee2f9d,PodSandboxId:7bc957dca058357d3e6cda45d341068b65a1b26aedb297529c7e1db7f85ef2e1,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973964023713935,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-2f89s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b617478e-9821-4355-a955-f4a6ffbf53b1,},A
nnotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e349c015dabb53aa1f6c831c1efb38ba21bf5d3f6ecb1dac229e01239920a3fe,PodSandboxId:2cba0736effb478955b555dbe34e807870430b921c92543502d4b8ed9275d0d9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973963139907164,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-pn7vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eca6c3f4-81b7-46d0-ac96-127
a71d45d64,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1763973934856300448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},A
nnotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1763973933754258780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973933763525929,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:4,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1763973933870788273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1763973933385655914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,}
,Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1763973933150325918,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9,PodSandboxId:45d230cede02d4
e599ed94ff3887ff3d9397aa40c27c1abec2e9b797f5d2607c,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973926076725564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3dacfdccc837
21a487570eda19927fafdca18865f8f621c08e3e157ca638ba,PodSandboxId:1699ff116b614d623f26be2a573175c72c43b5863fd11385d46126845eec4a9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1763973926064239601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1763973925088993447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763973924522557491,Labels:map[string]string{io.kubernetes.container.name: storage-pro
visioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1763973924306060413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.
pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt
:1763973924184074191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa,PodSandboxId:2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1763973885940262838,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa,PodSandboxId:a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3
e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1763973885931178430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d34f613-7ef4-4c2b-be85-e798072c6d78 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.397560812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=192034ed-3893-44c0-b075-3e61e4216649 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.397686484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=192034ed-3893-44c0-b075-3e61e4216649 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.399395640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f872dac-7f3c-41e8-ab53-93bcc0a85bf1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.400295626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763974327400264759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:227769,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f872dac-7f3c-41e8-ab53-93bcc0a85bf1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.401181614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9560e854-abc3-421c-8970-7541035d581c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.401572786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9560e854-abc3-421c-8970-7541035d581c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.402062877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827,PodSandboxId:a8d6edf3bbc03d000600a1804bb0d5929a88da6611e76e3fea53c633de236f30,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1763973989470731217,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-sqj9h,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1586397e-e02d-491c-9634-72d82040c794,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports
: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95,PodSandboxId:7096c404881589ea1ac75170031d87ef95a7753e66d7967a2636c175cb1123db,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763973968010381256,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58be346b-a3c6-494c-864a-b5b43f398892,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23108845e5d9b86415fd3bd6e6b6c033662a26dd2d57c085ae7744c1eee2f9d,PodSandboxId:7bc957dca058357d3e6cda45d341068b65a1b26aedb297529c7e1db7f85ef2e1,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973964023713935,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-2f89s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b617478e-9821-4355-a955-f4a6ffbf53b1,},A
nnotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e349c015dabb53aa1f6c831c1efb38ba21bf5d3f6ecb1dac229e01239920a3fe,PodSandboxId:2cba0736effb478955b555dbe34e807870430b921c92543502d4b8ed9275d0d9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973963139907164,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-pn7vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eca6c3f4-81b7-46d0-ac96-127
a71d45d64,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1763973934856300448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},A
nnotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1763973933754258780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973933763525929,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:4,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1763973933870788273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1763973933385655914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,}
,Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1763973933150325918,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9,PodSandboxId:45d230cede02d4
e599ed94ff3887ff3d9397aa40c27c1abec2e9b797f5d2607c,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973926076725564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3dacfdccc837
21a487570eda19927fafdca18865f8f621c08e3e157ca638ba,PodSandboxId:1699ff116b614d623f26be2a573175c72c43b5863fd11385d46126845eec4a9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1763973926064239601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1763973925088993447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763973924522557491,Labels:map[string]string{io.kubernetes.container.name: storage-pro
visioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1763973924306060413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.
pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt
:1763973924184074191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa,PodSandboxId:2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1763973885940262838,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa,PodSandboxId:a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3
e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1763973885931178430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9560e854-abc3-421c-8970-7541035d581c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.429718833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea64535f-91fc-4566-9969-b05e86f60289 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.430025394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea64535f-91fc-4566-9969-b05e86f60289 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.431869621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42deb360-5b8b-49d4-b5df-352aa06facdc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.433345151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763974327433272977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:227769,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42deb360-5b8b-49d4-b5df-352aa06facdc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.434149763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e39d1935-f498-496e-a92d-f4ecf1579301 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.434220418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e39d1935-f498-496e-a92d-f4ecf1579301 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.434591970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827,PodSandboxId:a8d6edf3bbc03d000600a1804bb0d5929a88da6611e76e3fea53c633de236f30,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1763973989470731217,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-sqj9h,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1586397e-e02d-491c-9634-72d82040c794,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports
: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95,PodSandboxId:7096c404881589ea1ac75170031d87ef95a7753e66d7967a2636c175cb1123db,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763973968010381256,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58be346b-a3c6-494c-864a-b5b43f398892,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23108845e5d9b86415fd3bd6e6b6c033662a26dd2d57c085ae7744c1eee2f9d,PodSandboxId:7bc957dca058357d3e6cda45d341068b65a1b26aedb297529c7e1db7f85ef2e1,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973964023713935,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-2f89s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b617478e-9821-4355-a955-f4a6ffbf53b1,},A
nnotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e349c015dabb53aa1f6c831c1efb38ba21bf5d3f6ecb1dac229e01239920a3fe,PodSandboxId:2cba0736effb478955b555dbe34e807870430b921c92543502d4b8ed9275d0d9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973963139907164,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-pn7vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eca6c3f4-81b7-46d0-ac96-127
a71d45d64,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1763973934856300448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},A
nnotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1763973933754258780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973933763525929,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:4,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1763973933870788273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1763973933385655914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,}
,Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1763973933150325918,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9,PodSandboxId:45d230cede02d4
e599ed94ff3887ff3d9397aa40c27c1abec2e9b797f5d2607c,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973926076725564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3dacfdccc837
21a487570eda19927fafdca18865f8f621c08e3e157ca638ba,PodSandboxId:1699ff116b614d623f26be2a573175c72c43b5863fd11385d46126845eec4a9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1763973926064239601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1763973925088993447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763973924522557491,Labels:map[string]string{io.kubernetes.container.name: storage-pro
visioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1763973924306060413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.
pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt
:1763973924184074191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa,PodSandboxId:2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1763973885940262838,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa,PodSandboxId:a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3
e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1763973885931178430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e39d1935-f498-496e-a92d-f4ecf1579301 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.472217307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e609ddcc-0db3-4779-b1fe-fb7b12564790 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.472314257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e609ddcc-0db3-4779-b1fe-fb7b12564790 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.473920572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4451f95-81df-4ae5-818b-6a8d1485e2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.474637093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763974327474612836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:227769,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4451f95-81df-4ae5-818b-6a8d1485e2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.475771906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a21ced62-3c16-4eb4-9b8e-72ecd66964fa name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.475987571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a21ced62-3c16-4eb4-9b8e-72ecd66964fa name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:52:07 functional-014740 crio[6620]: time="2025-11-24 08:52:07.476637107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827,PodSandboxId:a8d6edf3bbc03d000600a1804bb0d5929a88da6611e76e3fea53c633de236f30,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1763973989470731217,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-sqj9h,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1586397e-e02d-491c-9634-72d82040c794,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports
: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95,PodSandboxId:7096c404881589ea1ac75170031d87ef95a7753e66d7967a2636c175cb1123db,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763973968010381256,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58be346b-a3c6-494c-864a-b5b43f398892,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23108845e5d9b86415fd3bd6e6b6c033662a26dd2d57c085ae7744c1eee2f9d,PodSandboxId:7bc957dca058357d3e6cda45d341068b65a1b26aedb297529c7e1db7f85ef2e1,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973964023713935,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-2f89s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b617478e-9821-4355-a955-f4a6ffbf53b1,},A
nnotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e349c015dabb53aa1f6c831c1efb38ba21bf5d3f6ecb1dac229e01239920a3fe,PodSandboxId:2cba0736effb478955b555dbe34e807870430b921c92543502d4b8ed9275d0d9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973963139907164,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-pn7vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eca6c3f4-81b7-46d0-ac96-127
a71d45d64,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1763973934856300448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},A
nnotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1763973933754258780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973933763525929,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:4,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1763973933870788273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1763973933385655914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,}
,Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1763973933150325918,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9,PodSandboxId:45d230cede02d4
e599ed94ff3887ff3d9397aa40c27c1abec2e9b797f5d2607c,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973926076725564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3dacfdccc837
21a487570eda19927fafdca18865f8f621c08e3e157ca638ba,PodSandboxId:1699ff116b614d623f26be2a573175c72c43b5863fd11385d46126845eec4a9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1763973926064239601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1763973925088993447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763973924522557491,Labels:map[string]string{io.kubernetes.container.name: storage-pro
visioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1763973924306060413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.
pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt
:1763973924184074191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa,PodSandboxId:2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1763973885940262838,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa,PodSandboxId:a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3
e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1763973885931178430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a21ced62-3c16-4eb4-9b8e-72ecd66964fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d0856c189bb34       07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558                                        5 minutes ago       Running             kubernetes-dashboard      0                   a8d6edf3bbc03       kubernetes-dashboard-b84665fb8-sqj9h        kubernetes-dashboard
	319a0e58c733c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     5 minutes ago       Exited              mount-munger              0                   7096c40488158       busybox-mount                               default
	c23108845e5d9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   6 minutes ago       Running             echo-server               0                   7bc957dca0583       hello-node-connect-9f67c86d4-2f89s          default
	e349c015dabb5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   6 minutes ago       Running             echo-server               0                   2cba0736effb4       hello-node-5758569b79-pn7vs                 default
	e0b10832d9b2d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                        6 minutes ago       Running             kube-apiserver            1                   bc595842b180d       kube-apiserver-functional-014740            kube-system
	6ee113531524b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                        6 minutes ago       Running             coredns                   4                   e6fbbc8093f95       coredns-7d764666f9-l8z65                    kube-system
	4c57362e95194       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Running             storage-provisioner       4                   7396e141256f1       storage-provisioner                         kube-system
	7b7bf2a6b30c7       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                        6 minutes ago       Running             kube-proxy                4                   1a54571921266       kube-proxy-wkccz                            kube-system
	440d9d755f369       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                        6 minutes ago       Exited              kube-apiserver            0                   bc595842b180d       kube-apiserver-functional-014740            kube-system
	2e96d08b56208       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                        6 minutes ago       Running             kube-scheduler            4                   3d9df184706a9       kube-scheduler-functional-014740            kube-system
	0b14c29be6345       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                        6 minutes ago       Running             etcd                      3                   45d230cede02d       etcd-functional-014740                      kube-system
	7e3dacfdccc83       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                        6 minutes ago       Running             kube-controller-manager   3                   1699ff116b614       kube-controller-manager-functional-014740   kube-system
	8bcde7a85bd31       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                        6 minutes ago       Exited              coredns                   3                   e6fbbc8093f95       coredns-7d764666f9-l8z65                    kube-system
	6fa7c1a121ada       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Exited              storage-provisioner       3                   7396e141256f1       storage-provisioner                         kube-system
	18447548c9275       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                        6 minutes ago       Exited              kube-scheduler            3                   3d9df184706a9       kube-scheduler-functional-014740            kube-system
	36642cf50eed4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                        6 minutes ago       Exited              kube-proxy                3                   1a54571921266       kube-proxy-wkccz                            kube-system
	61106a13fac25       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                        7 minutes ago       Exited              kube-controller-manager   2                   2374a637b1a4f       kube-controller-manager-functional-014740   kube-system
	1c22b96cabedc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                        7 minutes ago       Exited              etcd                      2                   a3c535c59b0fb       etcd-functional-014740                      kube-system
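	
	The table above is the human-readable rendering of the same /runtime.v1.RuntimeService/ListContainers responses captured repeatedly in the CRI-O debug log. As a hedged illustration only (the socket path and module choices are standard CRI-O/minikube defaults assumed here, not values taken from this report), a minimal Go sketch that issues the identical RPC against the node's CRI socket could look like this:
	
	```go
	// Minimal sketch: query CRI-O's RuntimeService/ListContainers directly.
	// Assumptions: CRI-O listens on /var/run/crio/crio.sock (the usual crio
	// default) and k8s.io/cri-api v1 plus grpc-go are on the module path.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the local CRI socket over gRPC; no TLS is used on the unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// An empty filter follows the "No filters were applied, returning full
		// container list" path seen in the debug log above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
	```
	
	On the node itself, `sudo crictl ps -a` exercises the same RPC and produces essentially the table above.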
	
	
	==> coredns [6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49749 - 65040 "HINFO IN 5910895563406453664.147485293763041281. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021961053s
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] plugin/kubernetes: Warning: watch ended with error
	
	
	==> coredns [8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52431 - 14646 "HINFO IN 6506770586005438145.4420967861921683414. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070676055s
	
	
	==> describe nodes <==
	Name:               functional-014740
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-014740
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-014740
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_43_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:43:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-014740
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:52:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    functional-014740
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a556ddd445bb4629a43a4a2ba031a750
	  System UUID:                a556ddd4-45bb-4629-a43a-4a2ba031a750
	  Boot ID:                    cff11134-5023-4717-bef0-866bf9423e1b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-pn7vs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     hello-node-connect-9f67c86d4-2f89s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-844cf969f6-5jplz                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m52s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7d764666f9-l8z65                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m33s
	  kube-system                 etcd-functional-014740                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m40s
	  kube-system                 kube-apiserver-functional-014740              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-functional-014740     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-wkccz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-scheduler-functional-014740              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-5ndqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-sqj9h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m35s  node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	  Normal  RegisteredNode  7m42s  node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	  Normal  RegisteredNode  7m16s  node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	  Normal  RegisteredNode  6m36s  node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	
	
	==> dmesg <==
	[  +1.181900] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov24 08:43] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.102142] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.094339] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.138724] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.175302] kauditd_printk_skb: 12 callbacks suppressed
	[Nov24 08:44] kauditd_printk_skb: 290 callbacks suppressed
	[  +2.370303] kauditd_printk_skb: 327 callbacks suppressed
	[  +2.250277] kauditd_printk_skb: 27 callbacks suppressed
	[  +3.010196] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.394773] kauditd_printk_skb: 42 callbacks suppressed
	[  +2.060958] kauditd_printk_skb: 45 callbacks suppressed
	[Nov24 08:45] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.789389] kauditd_printk_skb: 357 callbacks suppressed
	[  +0.416126] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.064444] kauditd_printk_skb: 104 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.000099] kauditd_printk_skb: 110 callbacks suppressed
	[  +3.976390] kauditd_printk_skb: 61 callbacks suppressed
	[  +6.574923] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.876823] crun[10963]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +5.910225] kauditd_printk_skb: 10 callbacks suppressed
	[ +25.112844] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9] <==
	{"level":"warn","ts":"2025-11-24T08:45:35.594811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.604480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.611809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.621350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.631422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.644769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.665071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.680774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.686468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.693867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.708501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.713538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.721258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.735746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.740588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.748809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.754314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.805586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56960","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:46:22.470710Z","caller":"traceutil/trace.go:172","msg":"trace[1265965208] linearizableReadLoop","detail":"{readStateIndex:1096; appliedIndex:1096; }","duration":"358.87068ms","start":"2025-11-24T08:46:22.111824Z","end":"2025-11-24T08:46:22.470694Z","steps":["trace[1265965208] 'read index received'  (duration: 358.866208ms)","trace[1265965208] 'applied index is now lower than readState.Index'  (duration: 3.584µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T08:46:22.470920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"359.050406ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:46:22.470954Z","caller":"traceutil/trace.go:172","msg":"trace[434063114] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1003; }","duration":"359.129732ms","start":"2025-11-24T08:46:22.111817Z","end":"2025-11-24T08:46:22.470947Z","steps":["trace[434063114] 'agreement among raft nodes before linearized reading'  (duration: 359.029343ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:46:22.471628Z","caller":"traceutil/trace.go:172","msg":"trace[449416632] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"399.689533ms","start":"2025-11-24T08:46:22.071926Z","end":"2025-11-24T08:46:22.471616Z","steps":["trace[449416632] 'process raft request'  (duration: 399.491024ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:46:22.472283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.857475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:799"}
	{"level":"info","ts":"2025-11-24T08:46:22.472366Z","caller":"traceutil/trace.go:172","msg":"trace[558324443] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:1004; }","duration":"177.946353ms","start":"2025-11-24T08:46:22.294405Z","end":"2025-11-24T08:46:22.472352Z","steps":["trace[558324443] 'agreement among raft nodes before linearized reading'  (duration: 177.738586ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:46:22.472784Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T08:46:22.071909Z","time spent":"399.840142ms","remote":"127.0.0.1:56202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1003 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> etcd [1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa] <==
	{"level":"warn","ts":"2025-11-24T08:44:47.475505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.482557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.495250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.504528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.512251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.520323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.578205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40104","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:45:14.991643Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T08:45:14.994512Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-014740","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.85:2380"],"advertise-client-urls":["https://192.168.39.85:2379"]}
	{"level":"error","ts":"2025-11-24T08:45:14.997154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:45:14.998390Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:45:15.077268Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077507Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:45:15.077878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077606Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.85:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077921Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.85:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:45:15.077979Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.85:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:45:15.077630Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f64f40e18c9d70e7","current-leader-member-id":"f64f40e18c9d70e7"}
	{"level":"info","ts":"2025-11-24T08:45:15.078065Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T08:45:15.078129Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T08:45:15.081251Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.85:2380"}
	{"level":"error","ts":"2025-11-24T08:45:15.081352Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.85:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:45:15.081372Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.85:2380"}
	{"level":"info","ts":"2025-11-24T08:45:15.081378Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-014740","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.85:2380"],"advertise-client-urls":["https://192.168.39.85:2379"]}
	
	
	==> kernel <==
	 08:52:07 up 9 min,  0 users,  load average: 0.17, 0.19, 0.14
	Linux functional-014740 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872] <==
	I1124 08:45:34.002332       1 options.go:263] external host was not specified, using 192.168.39.85
	I1124 08:45:34.026735       1 server.go:150] Version: v1.35.0-beta.0
	I1124 08:45:34.026865       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1124 08:45:34.035253       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b] <==
	I1124 08:45:36.930377       1 aggregator.go:187] initial CRD sync complete...
	I1124 08:45:36.930400       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 08:45:36.930406       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 08:45:36.930410       1 cache.go:39] Caches are synced for autoregister controller
	I1124 08:45:36.969615       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 08:45:36.979901       1 shared_informer.go:377] "Caches are synced"
	I1124 08:45:36.979957       1 policy_source.go:248] refreshing policies
	I1124 08:45:37.022548       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 08:45:37.315783       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1124 08:45:37.631934       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.85]
	I1124 08:45:37.633199       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 08:45:37.642832       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 08:45:39.579125       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 08:45:39.579385       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 08:45:39.593046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 08:45:55.786490       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.15.82"}
	I1124 08:45:55.968528       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 08:46:00.217463       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.221.21"}
	I1124 08:46:00.752917       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.96.10"}
	I1124 08:46:13.439443       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 08:46:13.674243       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 08:46:13.725693       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 08:46:14.000236       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.222.70"}
	I1124 08:46:14.049200       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.173.31"}
	I1124 08:46:15.691371       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.122.150"}
	
	
	==> kube-controller-manager [61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa] <==
	I1124 08:44:51.418693       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:44:51.418750       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.418874       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.419032       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.419195       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.419656       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.420962       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.421121       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.421620       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422006       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422113       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 08:44:51.422193       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-014740"
	I1124 08:44:51.422253       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1124 08:44:51.422551       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422603       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422707       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.424183       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.424250       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.425241       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:44:51.425692       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.440922       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.518325       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.518355       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 08:44:51.518360       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 08:44:51.525502       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [7e3dacfdccc83721a487570eda19927fafdca18865f8f621c08e3e157ca638ba] <==
	E1124 08:45:36.580188       1 reflector.go:204] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1124 08:45:36.580205       1 reflector.go:204] "Failed to watch" err="secrets is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"secrets\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Secret"
	E1124 08:45:36.580312       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope - error from a previous attempt: read tcp 192.168.39.85:40314->192.168.39.85:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 08:45:36.580210       1 reflector.go:204] "Failed to watch" err="resourcequotas is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"resourcequotas\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceQuota"
	E1124 08:45:36.580881       1 reflector.go:204] "Failed to watch" err="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"prioritylevelconfigurations\" in API group \"flowcontrol.apiserver.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PriorityLevelConfiguration"
	E1124 08:45:36.580910       1 reflector.go:204] "Failed to watch" err="leases.coordination.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"leases\" in API group \"coordination.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Lease"
	E1124 08:45:36.580960       1 reflector.go:204] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 08:45:36.614415       1 reflector.go:204] "Failed to watch" err="daemonsets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"daemonsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DaemonSet"
	E1124 08:45:36.614512       1 reflector.go:204] "Failed to watch" err="runtimeclasses.node.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.RuntimeClass"
	E1124 08:45:36.614556       1 reflector.go:204] "Failed to watch" err="priorityclasses.scheduling.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"priorityclasses\" in API group \"scheduling.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PriorityClass"
	E1124 08:45:36.614596       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 08:45:36.614634       1 reflector.go:204] "Failed to watch" err="serviceaccounts is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"serviceaccounts\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ServiceAccount"
	E1124 08:45:36.614732       1 reflector.go:204] "Failed to watch" err="pods is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:45:36.614748       1 reflector.go:204] "Failed to watch" err="limitranges is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"limitranges\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.LimitRange"
	E1124 08:45:36.617535       1 reflector.go:204] "Failed to watch" err="validatingadmissionpolicies.admissionregistration.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"validatingadmissionpolicies\" in API group \"admissionregistration.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ValidatingAdmissionPolicy"
	E1124 08:46:13.686957       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.697689       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.706144       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.729019       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.733387       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.761266       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.766326       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.773742       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.784906       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b] <==
	I1124 08:45:24.838472       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:45:28.039305       1 shared_informer.go:377] "Caches are synced"
	I1124 08:45:28.039362       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.85"]
	E1124 08:45:28.039434       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:45:28.074065       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 08:45:28.074161       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 08:45:28.074183       1 server_linux.go:136] "Using iptables Proxier"
	I1124 08:45:28.086657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:45:28.086993       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 08:45:28.087042       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:45:28.094952       1 config.go:309] "Starting node config controller"
	I1124 08:45:28.095007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:45:28.095025       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:45:28.095516       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:45:28.095559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:45:28.095653       1 config.go:200] "Starting service config controller"
	I1124 08:45:28.095927       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:45:28.095884       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:45:28.095976       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:45:28.196334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 08:45:28.196371       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:45:28.196376       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907] <==
	I1124 08:45:34.689667       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:45:36.991425       1 shared_informer.go:377] "Caches are synced"
	I1124 08:45:36.991484       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.85"]
	E1124 08:45:36.991585       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:45:37.050569       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 08:45:37.051187       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 08:45:37.051810       1 server_linux.go:136] "Using iptables Proxier"
	I1124 08:45:37.118469       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:45:37.120567       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 08:45:37.120811       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:45:37.123385       1 config.go:200] "Starting service config controller"
	I1124 08:45:37.123432       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:45:37.123458       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:45:37.123479       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:45:37.123499       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:45:37.123512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:45:37.123790       1 config.go:309] "Starting node config controller"
	I1124 08:45:37.123823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:45:37.224280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:45:37.224306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:45:37.224349       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:45:37.224365       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e] <==
	E1124 08:45:27.981278       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1124 08:45:27.981599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 08:45:27.981719       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1124 08:45:27.981776       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:45:27.981857       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1124 08:45:27.981948       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1124 08:45:27.981982       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1124 08:45:27.982005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 08:45:27.982044       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 08:45:27.982151       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 08:45:27.982176       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 08:45:27.982259       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 08:45:27.982302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 08:45:27.982340       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1124 08:45:27.984813       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 08:45:28.014036       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 08:45:28.014651       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1124 08:45:30.196537       1 shared_informer.go:377] "Caches are synced"
	E1124 08:45:30.711283       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1124 08:45:30.711590       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 08:45:30.711699       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 08:45:30.711711       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 08:45:30.711724       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 08:45:30.711909       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 08:45:30.711924       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf] <==
	E1124 08:45:36.621153       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1124 08:45:36.621188       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1124 08:45:36.623186       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 08:45:36.627350       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1124 08:45:36.618655       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1124 08:45:36.641258       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 08:45:36.763201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 08:45:36.763393       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 08:45:36.763445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1124 08:45:36.763492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1124 08:45:36.763526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 08:45:36.763580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 08:45:36.763611       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1124 08:45:36.763657       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:45:36.763721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1124 08:45:36.765614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 08:45:36.766334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1124 08:45:36.766371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 08:45:36.766411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1124 08:45:36.766453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 08:45:36.766499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 08:45:36.766529       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 08:45:36.766556       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 08:45:36.769142       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1124 08:45:36.983805       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 08:51:05 functional-014740 kubelet[7797]: E1124 08:51:05.453462    7797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-014740" containerName="kube-scheduler"
	Nov 24 08:51:07 functional-014740 kubelet[7797]: E1124 08:51:07.453503    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="619282a4-15ce-4cc0-b729-c1ded2320f4e"
	Nov 24 08:51:12 functional-014740 kubelet[7797]: E1124 08:51:12.748867    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974272748290928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:12 functional-014740 kubelet[7797]: E1124 08:51:12.748918    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974272748290928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:14 functional-014740 kubelet[7797]: E1124 08:51:14.454540    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-5jplz" podUID="7c67e2c1-1348-4d52-90ac-4e51eb6249c9"
	Nov 24 08:51:22 functional-014740 kubelet[7797]: E1124 08:51:22.454507    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="619282a4-15ce-4cc0-b729-c1ded2320f4e"
	Nov 24 08:51:22 functional-014740 kubelet[7797]: E1124 08:51:22.751979    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974282751369204  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:22 functional-014740 kubelet[7797]: E1124 08:51:22.752024    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974282751369204  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:28 functional-014740 kubelet[7797]: E1124 08:51:28.462150    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-5jplz" podUID="7c67e2c1-1348-4d52-90ac-4e51eb6249c9"
	Nov 24 08:51:32 functional-014740 kubelet[7797]: E1124 08:51:32.633062    7797 manager.go:1119] Failed to create existing container: /kubepods/burstable/podf67ca509a8eba059ef5d4ac292857e56/crio-a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba: Error finding container a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba: Status 404 returned error can't find the container with id a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba
	Nov 24 08:51:32 functional-014740 kubelet[7797]: E1124 08:51:32.633811    7797 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda475eee0758ad7236a89457ac4641eaa/crio-2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d: Error finding container 2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d: Status 404 returned error can't find the container with id 2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d
	Nov 24 08:51:32 functional-014740 kubelet[7797]: E1124 08:51:32.755684    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974292755018860  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:32 functional-014740 kubelet[7797]: E1124 08:51:32.755836    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974292755018860  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:35 functional-014740 kubelet[7797]: E1124 08:51:35.453999    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="619282a4-15ce-4cc0-b729-c1ded2320f4e"
	Nov 24 08:51:39 functional-014740 kubelet[7797]: E1124 08:51:39.455671    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-5jplz" podUID="7c67e2c1-1348-4d52-90ac-4e51eb6249c9"
	Nov 24 08:51:42 functional-014740 kubelet[7797]: E1124 08:51:42.758370    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974302757969820  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:42 functional-014740 kubelet[7797]: E1124 08:51:42.758390    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974302757969820  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:46 functional-014740 kubelet[7797]: E1124 08:51:46.453690    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="619282a4-15ce-4cc0-b729-c1ded2320f4e"
	Nov 24 08:51:50 functional-014740 kubelet[7797]: E1124 08:51:50.453818    7797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-sqj9h" containerName="kubernetes-dashboard"
	Nov 24 08:51:52 functional-014740 kubelet[7797]: E1124 08:51:52.760353    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974312759789051  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:52 functional-014740 kubelet[7797]: E1124 08:51:52.760387    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974312759789051  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:51:59 functional-014740 kubelet[7797]: E1124 08:51:59.452951    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="619282a4-15ce-4cc0-b729-c1ded2320f4e"
	Nov 24 08:52:00 functional-014740 kubelet[7797]: E1124 08:52:00.453814    7797 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-l8z65" containerName="coredns"
	Nov 24 08:52:02 functional-014740 kubelet[7797]: E1124 08:52:02.762888    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974322762415878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:52:02 functional-014740 kubelet[7797]: E1124 08:52:02.762926    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974322762415878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	
	
	==> kubernetes-dashboard [d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827] <==
	2025/11/24 08:46:29 Using namespace: kubernetes-dashboard
	2025/11/24 08:46:29 Using in-cluster config to connect to apiserver
	2025/11/24 08:46:29 Using secret token for csrf signing
	2025/11/24 08:46:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 08:46:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 08:46:29 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/11/24 08:46:29 Generating JWE encryption key
	2025/11/24 08:46:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 08:46:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 08:46:29 Initializing JWE encryption key from synchronized object
	2025/11/24 08:46:29 Creating in-cluster Sidecar client
	2025/11/24 08:46:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:46:29 Serving insecurely on HTTP port: 9090
	2025/11/24 08:46:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:47:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:47:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:48:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:48:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:49:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:49:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:50:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:50:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:51:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:52:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:46:29 Starting overwatch
	
	
	==> storage-provisioner [4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928] <==
	W1124 08:51:42.162823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:44.165562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:44.170528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:46.174213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:46.179901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:48.182985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:48.187749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:50.191427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:50.200043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:52.204580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:52.210208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:54.214721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:54.220432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:56.224666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:56.229425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:58.232557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:51:58.240550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:00.244867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:00.254323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:02.257968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:02.263434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:04.266323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:04.274690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:06.277606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:52:06.282678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f] <==
	I1124 08:45:24.822469       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 08:45:24.836569       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-014740 -n functional-014740
helpers_test.go:269: (dbg) Run:  kubectl --context functional-014740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-014740 describe pod busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-014740 describe pod busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr: exit status 1 (86.770293ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014740/192.168.39.85
	Start Time:       Mon, 24 Nov 2025 08:46:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 08:46:08 +0000
	      Finished:     Mon, 24 Nov 2025 08:46:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6x57 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n6x57:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m4s  default-scheduler  Successfully assigned default/busybox-mount to functional-014740
	  Normal  Pulling    6m4s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m1s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.277s (3.278s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m    kubelet            Container created
	  Normal  Started    6m    kubelet            Container started
	
	
	Name:             mysql-844cf969f6-5jplz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014740/192.168.39.85
	Start Time:       Mon, 24 Nov 2025 08:46:15 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r46jc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r46jc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m53s                 default-scheduler  Successfully assigned default/mysql-844cf969f6-5jplz to functional-014740
	  Warning  Failed     3m7s (x2 over 4m11s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x2 over 5m14s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x4 over 5m14s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    29s (x11 over 5m14s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     29s (x11 over 5m14s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    16s (x5 over 5m52s)   kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014740/192.168.39.85
	Start Time:       Mon, 24 Nov 2025 08:46:06 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5k24d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5k24d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-014740
	  Normal   Pulled     5m53s                kubelet            Successfully pulled image "docker.io/nginx" in 7.729s (8.804s including waiting). Image size: 155491845 bytes.
	  Warning  Failed     5m36s                kubelet            Error: container create failed: time="2025-11-24T08:46:15Z" level=error msg="runc create failed: unable to start container process: error during container init: exec: \"/docker-entrypoint.sh\": stat /docker-entrypoint.sh: no such file or directory"
	  Normal   Pulling    104s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     72s (x4 over 4m42s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x4 over 4m42s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x10 over 4m42s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x10 over 4m42s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-5ndqr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-014740 describe pod busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr: exit status 1
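Note that the non-zero exit above comes only from the one listed pod that no longer exists (the NotFound error in stderr); the other three pods were described successfully. A hedged sketch of filtering the list down to live pods first, so the describe step cannot fail on a vanished pod (the pod names are taken from the report above; --ignore-not-found and the pipeline are the only additions):

# Keep only the pods that still exist, then describe just those.
kubectl --context functional-014740 get pod busybox-mount mysql-844cf969f6-5jplz sp-pod \
  dashboard-metrics-scraper-5565989548-5ndqr --ignore-not-found -o name \
  | xargs -r kubectl --context functional-014740 describe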
E1124 08:52:23.207931    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:53:07.682105    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.67s)
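For reference, the pod this test waits on mounts PersistentVolumeClaim "myclaim" at /tmp/mount and runs docker.io/nginx, as the describe output above shows; the failure is the image pull, not the claim binding. A minimal sketch of an equivalent PVC-plus-pod manifest follows (this is not minikube's actual testdata; the storage size and access mode are assumptions):

# Hypothetical reproduction of the PVC + pod shape seen in the describe output above.
kubectl --context functional-014740 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]   # assumption; not stated in the report
  resources:
    requests:
      storage: 500Mi               # assumption; not stated in the report
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF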

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-014740 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-5jplz" [7c67e2c1-1348-4d52-90ac-4e51eb6249c9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-014740 -n functional-014740
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-11-24 08:56:15.990403776 +0000 UTC m=+1655.375751251
functional_test.go:1804: (dbg) Run:  kubectl --context functional-014740 describe po mysql-844cf969f6-5jplz -n default
functional_test.go:1804: (dbg) kubectl --context functional-014740 describe po mysql-844cf969f6-5jplz -n default:
Name:             mysql-844cf969f6-5jplz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-014740/192.168.39.85
Start Time:       Mon, 24 Nov 2025 08:46:15 +0000
Labels:           app=mysql
pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r46jc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-r46jc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-844cf969f6-5jplz to functional-014740
Warning  Failed     5m53s (x2 over 9m22s)   kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m24s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     3m52s (x5 over 9m22s)   kubelet            Error: ErrImagePull
Warning  Failed     3m52s (x3 over 8m19s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m49s (x16 over 9m22s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    107s (x21 over 9m22s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
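Every pull failure in these events is Docker Hub's anonymous-pull rate limit (the toomanyrequests responses above). A quick way to confirm the limit state is Docker's documented check against the ratelimitpreview/test repository; the sketch below assumes curl and jq are available on the CI host (the limit is keyed to the pulling IP, and a HEAD request does not consume a pull):

# Fetch an anonymous pull token, then read the rate-limit headers without consuming a pull.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'
# Expected headers look like: ratelimit-limit: 100;w=21600 and ratelimit-remaining: 0;w=21600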
functional_test.go:1804: (dbg) Run:  kubectl --context functional-014740 logs mysql-844cf969f6-5jplz -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-014740 logs mysql-844cf969f6-5jplz -n default: exit status 1 (70.917201ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-5jplz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-014740 logs mysql-844cf969f6-5jplz -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
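Since the wait timed out purely on the anonymous pull limit, two common mitigations are sketched below; both are assumptions about how one might unblock a run like this, not part of the test. The secret name and credential variables are placeholders, and the "mysql" Deployment name is inferred from the ReplicaSet name above:

# Option 1: stage the image on the node so the kubelet never pulls from Docker Hub.
docker pull docker.io/mysql:5.7          # assumes the host still has pull quota or a cached copy
minikube -p functional-014740 image load docker.io/mysql:5.7

# Option 2: pull as an authenticated user via an imagePullSecret (placeholder credentials).
kubectl --context functional-014740 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
kubectl --context functional-014740 patch deployment mysql \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'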
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-014740 -n functional-014740
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 logs -n 25: (1.392298416s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-014740 ssh sudo cat /etc/ssl/certs/96292.pem                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh sudo cat /usr/share/ca-certificates/96292.pem                                                                                          │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh sudo cat /etc/test/nested/copy/9629/hosts                                                                                              │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr                                                                │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr                                                                │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image save kicbase/echo-server:functional-014740 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image rm kicbase/echo-server:functional-014740 --alsologtostderr                                                                           │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image save --daemon kicbase/echo-server:functional-014740 --alsologtostderr                                                                │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ update-context │ functional-014740 update-context --alsologtostderr -v=2                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ update-context │ functional-014740 update-context --alsologtostderr -v=2                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ update-context │ functional-014740 update-context --alsologtostderr -v=2                                                                                                      │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format short --alsologtostderr                                                                                                  │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format yaml --alsologtostderr                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ ssh            │ functional-014740 ssh pgrep buildkitd                                                                                                                        │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │                     │
	│ image          │ functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr                                                       │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls                                                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format json --alsologtostderr                                                                                                   │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	│ image          │ functional-014740 image ls --format table --alsologtostderr                                                                                                  │ functional-014740 │ jenkins │ v1.37.0 │ 24 Nov 25 08:46 UTC │ 24 Nov 25 08:46 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:46:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:46:11.714620   19393 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:46:11.714773   19393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.714785   19393 out.go:374] Setting ErrFile to fd 2...
	I1124 08:46:11.714792   19393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.715248   19393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:46:11.715862   19393 out.go:368] Setting JSON to false
	I1124 08:46:11.717025   19393 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1708,"bootTime":1763972264,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:46:11.717101   19393 start.go:143] virtualization: kvm guest
	I1124 08:46:11.719049   19393 out.go:179] * [functional-014740] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:46:11.720406   19393 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:46:11.720406   19393 notify.go:221] Checking for updates...
	I1124 08:46:11.721700   19393 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:46:11.723058   19393 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:46:11.724430   19393 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:46:11.725668   19393 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:46:11.727020   19393 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:46:11.728756   19393 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:46:11.729435   19393 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:46:11.761441   19393 out.go:179] * Using the kvm2 driver based on the existing profile
	I1124 08:46:11.762608   19393 start.go:309] selected driver: kvm2
	I1124 08:46:11.762626   19393 start.go:927] validating driver "kvm2" against &{Name:functional-014740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-014740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:46:11.762766   19393 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:46:11.764721   19393 out.go:203] 
	W1124 08:46:11.766067   19393 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1124 08:46:11.767221   19393 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.744476349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763974576744451022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:227769,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5eb8ab4-d0f2-4a64-ae5e-5d5efe6cfc35 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.745478332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c4365ff-163b-4258-846e-5090e64f1da8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.745546816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c4365ff-163b-4258-846e-5090e64f1da8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.745920194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827,PodSandboxId:a8d6edf3bbc03d000600a1804bb0d5929a88da6611e76e3fea53c633de236f30,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1763973989470731217,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-sqj9h,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1586397e-e02d-491c-9634-72d82040c794,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports
: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95,PodSandboxId:7096c404881589ea1ac75170031d87ef95a7753e66d7967a2636c175cb1123db,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763973968010381256,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58be346b-a3c6-494c-864a-b5b43f398892,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23108845e5d9b86415fd3bd6e6b6c033662a26dd2d57c085ae7744c1eee2f9d,PodSandboxId:7bc957dca058357d3e6cda45d341068b65a1b26aedb297529c7e1db7f85ef2e1,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973964023713935,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-2f89s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b617478e-9821-4355-a955-f4a6ffbf53b1,},A
nnotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e349c015dabb53aa1f6c831c1efb38ba21bf5d3f6ecb1dac229e01239920a3fe,PodSandboxId:2cba0736effb478955b555dbe34e807870430b921c92543502d4b8ed9275d0d9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973963139907164,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-pn7vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eca6c3f4-81b7-46d0-ac96-127
a71d45d64,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1763973934856300448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},A
nnotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1763973933754258780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973933763525929,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:4,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1763973933870788273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1763973933385655914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,}
,Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1763973933150325918,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9,PodSandboxId:45d230cede02d4
e599ed94ff3887ff3d9397aa40c27c1abec2e9b797f5d2607c,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973926076725564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3dacfdccc837
21a487570eda19927fafdca18865f8f621c08e3e157ca638ba,PodSandboxId:1699ff116b614d623f26be2a573175c72c43b5863fd11385d46126845eec4a9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1763973926064239601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1763973925088993447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763973924522557491,Labels:map[string]string{io.kubernetes.container.name: storage-pro
visioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1763973924306060413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.
pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt
:1763973924184074191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa,PodSandboxId:2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1763973885940262838,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa,PodSandboxId:a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3
e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1763973885931178430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c4365ff-163b-4258-846e-5090e64f1da8 name=/runtime.v1.RuntimeService/ListContainers
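	The ListContainers dumps above come from CRI clients polling cri-o's RuntimeService every few tens of milliseconds; each poll returns the same container list. As a rough way to get a comparable but more readable listing interactively (a sketch only, not part of the test run: it assumes shell access to the node, e.g. via `minikube ssh -p functional-014740`, and that crictl and jq are present in the guest):

	  # assumption: run inside the node; the cri-o socket path is the distribution default
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json \
	    | jq '.containers[] | {name: .metadata.name, attempt: .metadata.attempt, state: .state}'

	This prints one compact record per container (name, restart attempt, CONTAINER_RUNNING/CONTAINER_EXITED), which is easier to scan than the raw interceptor debug lines.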
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.862414839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38ea1ccd-db91-4e09-a169-51dd5e9591a5 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.862499223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38ea1ccd-db91-4e09-a169-51dd5e9591a5 name=/runtime.v1.RuntimeService/Version
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.864152523Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5dcff7a-fe2c-43cd-ac56-8982acd0abb5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.865192830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763974576865166395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:227769,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5dcff7a-fe2c-43cd-ac56-8982acd0abb5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.866320842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ffd1b13-a128-4392-9cb4-fa9b5dee670a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.866391916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ffd1b13-a128-4392-9cb4-fa9b5dee670a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 08:56:16 functional-014740 crio[6620]: time="2025-11-24 08:56:16.866918629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827,PodSandboxId:a8d6edf3bbc03d000600a1804bb0d5929a88da6611e76e3fea53c633de236f30,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1763973989470731217,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-sqj9h,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 1586397e-e02d-491c-9634-72d82040c794,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports
: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95,PodSandboxId:7096c404881589ea1ac75170031d87ef95a7753e66d7967a2636c175cb1123db,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763973968010381256,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58be346b-a3c6-494c-864a-b5b43f398892,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23108845e5d9b86415fd3bd6e6b6c033662a26dd2d57c085ae7744c1eee2f9d,PodSandboxId:7bc957dca058357d3e6cda45d341068b65a1b26aedb297529c7e1db7f85ef2e1,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973964023713935,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-2f89s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b617478e-9821-4355-a955-f4a6ffbf53b1,},A
nnotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e349c015dabb53aa1f6c831c1efb38ba21bf5d3f6ecb1dac229e01239920a3fe,PodSandboxId:2cba0736effb478955b555dbe34e807870430b921c92543502d4b8ed9275d0d9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763973963139907164,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-pn7vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eca6c3f4-81b7-46d0-ac96-127
a71d45d64,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1763973934856300448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},A
nnotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1763973933754258780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763973933763525929,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:4,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1763973933870788273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872,PodSandboxId:bc595842b180dea283ea53f7da9af7cd586b134a45866972847476395abc852a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_EXITED,CreatedAt:1763973933385655914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4c8cd1351633d551fe03012af88f98,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,}
,Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1763973933150325918,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9,PodSandboxId:45d230cede02d4
e599ed94ff3887ff3d9397aa40c27c1abec2e9b797f5d2607c,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763973926076725564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3dacfdccc837
21a487570eda19927fafdca18865f8f621c08e3e157ca638ba,PodSandboxId:1699ff116b614d623f26be2a573175c72c43b5863fd11385d46126845eec4a9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1763973926064239601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1,PodSandboxId:e6fbbc8093f95c18a042e5d4a9dd0c2a997699fc1de7dc9b74e52ab485cbe4d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1763973925088993447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-l8z65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81210d26-fe1e-4eea-9c9d-a598fe952ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f,PodSandboxId:7396e141256f1d08a13da7ba4df2c246eaa27523d28ab8443bdf37f9695097a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763973924522557491,Labels:map[string]string{io.kubernetes.container.name: storage-pro
visioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a411fdbe-a695-4b8a-87a7-4e059588cd68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e,PodSandboxId:3d9df184706a908be54eb4b95afd0863a8ef3c5b6773e6f14cd97f8c3217d828,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1763973924306060413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.
pod.name: kube-scheduler-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bf0f42bfcad3d619059fe2db42be730,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b,PodSandboxId:1a54571921266f537602bfe3f8f378db1fe79f85970f9fb684b77ecfa1b57243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt
:1763973924184074191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkccz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daada5e4-0796-4a61-8316-8c5037014789,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa,PodSandboxId:2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1763973885940262838,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a475eee0758ad7236a89457ac4641eaa,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa,PodSandboxId:a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3
e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1763973885931178430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-014740,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67ca509a8eba059ef5d4ac292857e56,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ffd1b13-a128-4392-9cb4-fa9b5dee670a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d0856c189bb34       07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558                                        9 minutes ago       Running             kubernetes-dashboard      0                   a8d6edf3bbc03       kubernetes-dashboard-b84665fb8-sqj9h        kubernetes-dashboard
	319a0e58c733c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     10 minutes ago      Exited              mount-munger              0                   7096c40488158       busybox-mount                               default
	c23108845e5d9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   10 minutes ago      Running             echo-server               0                   7bc957dca0583       hello-node-connect-9f67c86d4-2f89s          default
	e349c015dabb5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   10 minutes ago      Running             echo-server               0                   2cba0736effb4       hello-node-5758569b79-pn7vs                 default
	e0b10832d9b2d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                        10 minutes ago      Running             kube-apiserver            1                   bc595842b180d       kube-apiserver-functional-014740            kube-system
	6ee113531524b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                        10 minutes ago      Running             coredns                   4                   e6fbbc8093f95       coredns-7d764666f9-l8z65                    kube-system
	4c57362e95194       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Running             storage-provisioner       4                   7396e141256f1       storage-provisioner                         kube-system
	7b7bf2a6b30c7       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                        10 minutes ago      Running             kube-proxy                4                   1a54571921266       kube-proxy-wkccz                            kube-system
	440d9d755f369       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                        10 minutes ago      Exited              kube-apiserver            0                   bc595842b180d       kube-apiserver-functional-014740            kube-system
	2e96d08b56208       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                        10 minutes ago      Running             kube-scheduler            4                   3d9df184706a9       kube-scheduler-functional-014740            kube-system
	0b14c29be6345       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                        10 minutes ago      Running             etcd                      3                   45d230cede02d       etcd-functional-014740                      kube-system
	7e3dacfdccc83       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                        10 minutes ago      Running             kube-controller-manager   3                   1699ff116b614       kube-controller-manager-functional-014740   kube-system
	8bcde7a85bd31       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                        10 minutes ago      Exited              coredns                   3                   e6fbbc8093f95       coredns-7d764666f9-l8z65                    kube-system
	6fa7c1a121ada       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Exited              storage-provisioner       3                   7396e141256f1       storage-provisioner                         kube-system
	18447548c9275       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                        10 minutes ago      Exited              kube-scheduler            3                   3d9df184706a9       kube-scheduler-functional-014740            kube-system
	36642cf50eed4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                        10 minutes ago      Exited              kube-proxy                3                   1a54571921266       kube-proxy-wkccz                            kube-system
	61106a13fac25       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                        11 minutes ago      Exited              kube-controller-manager   2                   2374a637b1a4f       kube-controller-manager-functional-014740   kube-system
	1c22b96cabedc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                        11 minutes ago      Exited              etcd                      2                   a3c535c59b0fb       etcd-functional-014740                      kube-system
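
	The table above is the node's CRI view of container state; the ATTEMPT column matches the restartCount values in the ListContainers responses earlier in this log. An equivalent listing can usually be reproduced by hand against the cri-o socket (illustrative only; "minikube" stands for whichever minikube binary the run used, and assumes the functional-014740 profile is still up):

	  minikube -p functional-014740 ssh "sudo crictl ps -a"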
	
	
	==> coredns [6ee113531524b3ff023c6448e57c47041fc9cd8ea3ef86aecfff22b1879338c0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49749 - 65040 "HINFO IN 5910895563406453664.147485293763041281. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021961053s
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] plugin/kubernetes: Warning: watch ended with error
	[INFO] plugin/kubernetes: Warning: watch ended with error
	
	
	==> coredns [8bcde7a85bd317214e272fea0ff8f831f54c0b6b6a159459ae33fd015d5075f1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52431 - 14646 "HINFO IN 6506770586005438145.4420967861921683414. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070676055s
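
	The exited coredns instance above was terminated while still waiting for the Kubernetes API, consistent with the apiserver restart shown later in these logs. If the pod is still present, the same previous-instance output could be pulled directly (illustrative command, using the pod name from this report):

	  kubectl --context functional-014740 -n kube-system logs coredns-7d764666f9-l8z65 --previous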
	
	
	==> describe nodes <==
	Name:               functional-014740
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-014740
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-014740
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T08_43_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 08:43:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-014740
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 08:56:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 08:51:29 +0000   Mon, 24 Nov 2025 08:43:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    functional-014740
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a556ddd445bb4629a43a4a2ba031a750
	  System UUID:                a556ddd4-45bb-4629-a43a-4a2ba031a750
	  Boot ID:                    cff11134-5023-4717-bef0-866bf9423e1b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-pn7vs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-2f89s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-5jplz                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-l8z65                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-014740                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-014740              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-014740     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wkccz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-014740              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-5ndqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-sqj9h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-014740 event: Registered Node functional-014740 in Controller
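
	The node description above is the standard kubectl view; the Allocated resources figures (1350m CPU requested of 2 CPUs, largely from the mysql pod's 600m request) can be re-checked at any time with:

	  kubectl --context functional-014740 describe node functional-014740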
	
	
	==> dmesg <==
	[  +1.181900] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov24 08:43] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.102142] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.094339] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.138724] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.175302] kauditd_printk_skb: 12 callbacks suppressed
	[Nov24 08:44] kauditd_printk_skb: 290 callbacks suppressed
	[  +2.370303] kauditd_printk_skb: 327 callbacks suppressed
	[  +2.250277] kauditd_printk_skb: 27 callbacks suppressed
	[  +3.010196] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.394773] kauditd_printk_skb: 42 callbacks suppressed
	[  +2.060958] kauditd_printk_skb: 45 callbacks suppressed
	[Nov24 08:45] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.789389] kauditd_printk_skb: 357 callbacks suppressed
	[  +0.416126] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.064444] kauditd_printk_skb: 104 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.000099] kauditd_printk_skb: 110 callbacks suppressed
	[  +3.976390] kauditd_printk_skb: 61 callbacks suppressed
	[  +6.574923] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.876823] crun[10963]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +5.910225] kauditd_printk_skb: 10 callbacks suppressed
	[ +25.112844] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [0b14c29be6345993ce700edeae4d0971a7fbbf5639811730fc7db7c8d9b164a9] <==
	{"level":"warn","ts":"2025-11-24T08:45:35.621350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.631422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.644769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.665071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.680774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.686468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.693867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.708501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.713538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.721258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.735746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.740588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.748809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.754314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:45:35.805586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56960","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:46:22.470710Z","caller":"traceutil/trace.go:172","msg":"trace[1265965208] linearizableReadLoop","detail":"{readStateIndex:1096; appliedIndex:1096; }","duration":"358.87068ms","start":"2025-11-24T08:46:22.111824Z","end":"2025-11-24T08:46:22.470694Z","steps":["trace[1265965208] 'read index received'  (duration: 358.866208ms)","trace[1265965208] 'applied index is now lower than readState.Index'  (duration: 3.584µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T08:46:22.470920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"359.050406ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T08:46:22.470954Z","caller":"traceutil/trace.go:172","msg":"trace[434063114] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1003; }","duration":"359.129732ms","start":"2025-11-24T08:46:22.111817Z","end":"2025-11-24T08:46:22.470947Z","steps":["trace[434063114] 'agreement among raft nodes before linearized reading'  (duration: 359.029343ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T08:46:22.471628Z","caller":"traceutil/trace.go:172","msg":"trace[449416632] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"399.689533ms","start":"2025-11-24T08:46:22.071926Z","end":"2025-11-24T08:46:22.471616Z","steps":["trace[449416632] 'process raft request'  (duration: 399.491024ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:46:22.472283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.857475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:799"}
	{"level":"info","ts":"2025-11-24T08:46:22.472366Z","caller":"traceutil/trace.go:172","msg":"trace[558324443] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:1004; }","duration":"177.946353ms","start":"2025-11-24T08:46:22.294405Z","end":"2025-11-24T08:46:22.472352Z","steps":["trace[558324443] 'agreement among raft nodes before linearized reading'  (duration: 177.738586ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T08:46:22.472784Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T08:46:22.071909Z","time spent":"399.840142ms","remote":"127.0.0.1:56202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1003 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-11-24T08:55:35.200177Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1281}
	{"level":"info","ts":"2025-11-24T08:55:35.224982Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1281,"took":"24.359168ms","hash":1416342850,"current-db-size-bytes":3940352,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1978368,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-11-24T08:55:35.225065Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1416342850,"revision":1281,"compact-revision":-1}
	
	
	==> etcd [1c22b96cabedc52573856ebbff845e87d49e9b28c723756c10551b68cbb43afa] <==
	{"level":"warn","ts":"2025-11-24T08:44:47.475505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.482557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.495250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.504528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.512251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.520323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T08:44:47.578205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40104","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T08:45:14.991643Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T08:45:14.994512Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-014740","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.85:2380"],"advertise-client-urls":["https://192.168.39.85:2379"]}
	{"level":"error","ts":"2025-11-24T08:45:14.997154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:45:14.998390Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T08:45:15.077268Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077507Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:45:15.077878Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077606Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.85:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T08:45:15.077921Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.85:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T08:45:15.077979Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.85:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:45:15.077630Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f64f40e18c9d70e7","current-leader-member-id":"f64f40e18c9d70e7"}
	{"level":"info","ts":"2025-11-24T08:45:15.078065Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T08:45:15.078129Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T08:45:15.081251Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.85:2380"}
	{"level":"error","ts":"2025-11-24T08:45:15.081352Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.85:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T08:45:15.081372Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.85:2380"}
	{"level":"info","ts":"2025-11-24T08:45:15.081378Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-014740","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.85:2380"],"advertise-client-urls":["https://192.168.39.85:2379"]}
	
	
	==> kernel <==
	 08:56:17 up 13 min,  0 users,  load average: 0.22, 0.26, 0.18
	Linux functional-014740 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [440d9d755f369ff570336784ce937624503b8aef60af78ef8ae80f04c8952872] <==
	I1124 08:45:34.002332       1 options.go:263] external host was not specified, using 192.168.39.85
	I1124 08:45:34.026735       1 server.go:150] Version: v1.35.0-beta.0
	I1124 08:45:34.026865       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1124 08:45:34.035253       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
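
	This exited apiserver attempt failed because port 8441 was still held by another process; the replacement attempt (e0b10832d9b2d, restartCount 1) came up about a second later and bound successfully. Were this to persist, a first check might be to see what owns the port inside the VM (illustrative command):

	  minikube -p functional-014740 ssh "sudo ss -ltnp 'sport = :8441'"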
	
	
	==> kube-apiserver [e0b10832d9b2d7dbd3f989870ad9cbbbcd5c0d0aa66839571f1cafc8988b515b] <==
	I1124 08:45:36.930400       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 08:45:36.930406       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 08:45:36.930410       1 cache.go:39] Caches are synced for autoregister controller
	I1124 08:45:36.969615       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 08:45:36.979901       1 shared_informer.go:377] "Caches are synced"
	I1124 08:45:36.979957       1 policy_source.go:248] refreshing policies
	I1124 08:45:37.022548       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 08:45:37.315783       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1124 08:45:37.631934       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.85]
	I1124 08:45:37.633199       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 08:45:37.642832       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 08:45:39.579125       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 08:45:39.579385       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 08:45:39.593046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 08:45:55.786490       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.15.82"}
	I1124 08:45:55.968528       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 08:46:00.217463       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.221.21"}
	I1124 08:46:00.752917       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.96.10"}
	I1124 08:46:13.439443       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 08:46:13.674243       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 08:46:13.725693       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 08:46:14.000236       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.222.70"}
	I1124 08:46:14.049200       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.173.31"}
	I1124 08:46:15.691371       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.122.150"}
	I1124 08:55:36.822137       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [61106a13fac2562428cac0e705d414ab2261037f4226ede30b9cb29c1208d9aa] <==
	I1124 08:44:51.418693       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:44:51.418750       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.418874       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.419032       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.419195       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.419656       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.420962       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.421121       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.421620       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422006       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422113       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1124 08:44:51.422193       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-014740"
	I1124 08:44:51.422253       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1124 08:44:51.422551       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422603       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.422707       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.424183       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.424250       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.425241       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:44:51.425692       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.440922       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.518325       1 shared_informer.go:377] "Caches are synced"
	I1124 08:44:51.518355       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 08:44:51.518360       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 08:44:51.525502       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [7e3dacfdccc83721a487570eda19927fafdca18865f8f621c08e3e157ca638ba] <==
	E1124 08:45:36.580188       1 reflector.go:204] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1124 08:45:36.580205       1 reflector.go:204] "Failed to watch" err="secrets is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"secrets\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Secret"
	E1124 08:45:36.580312       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope - error from a previous attempt: read tcp 192.168.39.85:40314->192.168.39.85:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 08:45:36.580210       1 reflector.go:204] "Failed to watch" err="resourcequotas is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"resourcequotas\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceQuota"
	E1124 08:45:36.580881       1 reflector.go:204] "Failed to watch" err="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"prioritylevelconfigurations\" in API group \"flowcontrol.apiserver.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PriorityLevelConfiguration"
	E1124 08:45:36.580910       1 reflector.go:204] "Failed to watch" err="leases.coordination.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"leases\" in API group \"coordination.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Lease"
	E1124 08:45:36.580960       1 reflector.go:204] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 08:45:36.614415       1 reflector.go:204] "Failed to watch" err="daemonsets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"daemonsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DaemonSet"
	E1124 08:45:36.614512       1 reflector.go:204] "Failed to watch" err="runtimeclasses.node.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.RuntimeClass"
	E1124 08:45:36.614556       1 reflector.go:204] "Failed to watch" err="priorityclasses.scheduling.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"priorityclasses\" in API group \"scheduling.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PriorityClass"
	E1124 08:45:36.614596       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 08:45:36.614634       1 reflector.go:204] "Failed to watch" err="serviceaccounts is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"serviceaccounts\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ServiceAccount"
	E1124 08:45:36.614732       1 reflector.go:204] "Failed to watch" err="pods is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:45:36.614748       1 reflector.go:204] "Failed to watch" err="limitranges is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"limitranges\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.LimitRange"
	E1124 08:45:36.617535       1 reflector.go:204] "Failed to watch" err="validatingadmissionpolicies.admissionregistration.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"validatingadmissionpolicies\" in API group \"admissionregistration.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ValidatingAdmissionPolicy"
	E1124 08:46:13.686957       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.697689       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.706144       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.729019       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.733387       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.761266       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.766326       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.773742       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 08:46:13.784906       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [36642cf50eed407e625237982f158c40fee31bf4f30101e2564d2f0f2c4a074b] <==
	I1124 08:45:24.838472       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:45:28.039305       1 shared_informer.go:377] "Caches are synced"
	I1124 08:45:28.039362       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.85"]
	E1124 08:45:28.039434       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:45:28.074065       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 08:45:28.074161       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 08:45:28.074183       1 server_linux.go:136] "Using iptables Proxier"
	I1124 08:45:28.086657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:45:28.086993       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 08:45:28.087042       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:45:28.094952       1 config.go:309] "Starting node config controller"
	I1124 08:45:28.095007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:45:28.095025       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:45:28.095516       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:45:28.095559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:45:28.095653       1 config.go:200] "Starting service config controller"
	I1124 08:45:28.095927       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:45:28.095884       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:45:28.095976       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:45:28.196334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 08:45:28.196371       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:45:28.196376       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [7b7bf2a6b30c708ac287340205e25885fb2b52505e04b552099d4f1d5c4b6907] <==
	I1124 08:45:34.689667       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 08:45:36.991425       1 shared_informer.go:377] "Caches are synced"
	I1124 08:45:36.991484       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.85"]
	E1124 08:45:36.991585       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 08:45:37.050569       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 08:45:37.051187       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 08:45:37.051810       1 server_linux.go:136] "Using iptables Proxier"
	I1124 08:45:37.118469       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 08:45:37.120567       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 08:45:37.120811       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 08:45:37.123385       1 config.go:200] "Starting service config controller"
	I1124 08:45:37.123432       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 08:45:37.123458       1 config.go:106] "Starting endpoint slice config controller"
	I1124 08:45:37.123479       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 08:45:37.123499       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 08:45:37.123512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 08:45:37.123790       1 config.go:309] "Starting node config controller"
	I1124 08:45:37.123823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 08:45:37.224280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 08:45:37.224306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 08:45:37.224349       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 08:45:37.224365       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [18447548c927507b7c1c8d8aee2fbf641876ee82ffa6220fe08bfe11aacb104e] <==
	E1124 08:45:27.981278       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1124 08:45:27.981599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 08:45:27.981719       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1124 08:45:27.981776       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:45:27.981857       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1124 08:45:27.981948       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1124 08:45:27.981982       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1124 08:45:27.982005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 08:45:27.982044       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 08:45:27.982151       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 08:45:27.982176       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 08:45:27.982259       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 08:45:27.982302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 08:45:27.982340       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1124 08:45:27.984813       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 08:45:28.014036       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 08:45:28.014651       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1124 08:45:30.196537       1 shared_informer.go:377] "Caches are synced"
	E1124 08:45:30.711283       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1124 08:45:30.711590       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 08:45:30.711699       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 08:45:30.711711       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 08:45:30.711724       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 08:45:30.711909       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 08:45:30.711924       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2e96d08b56208dcf8ec2b89f439f7245a9687274350f79218e30801fccfb74cf] <==
	E1124 08:45:36.621153       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1124 08:45:36.621188       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1124 08:45:36.623186       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 08:45:36.627350       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1124 08:45:36.618655       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1124 08:45:36.641258       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 08:45:36.763201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 08:45:36.763393       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 08:45:36.763445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1124 08:45:36.763492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1124 08:45:36.763526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 08:45:36.763580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 08:45:36.763611       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1124 08:45:36.763657       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 08:45:36.763721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1124 08:45:36.765614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1124 08:45:36.766334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1124 08:45:36.766371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 08:45:36.766411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1124 08:45:36.766453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 08:45:36.766499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 08:45:36.766529       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 08:45:36.766556       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 08:45:36.769142       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1124 08:45:36.983805       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 08:55:32 functional-014740 kubelet[7797]: E1124 08:55:32.630468    7797 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda475eee0758ad7236a89457ac4641eaa/crio-2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d: Error finding container 2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d: Status 404 returned error can't find the container with id 2374a637b1a4fd1f98df040faea373ee124af41b14cf1c8d854b973384418d4d
	Nov 24 08:55:32 functional-014740 kubelet[7797]: E1124 08:55:32.630855    7797 manager.go:1119] Failed to create existing container: /kubepods/burstable/podf67ca509a8eba059ef5d4ac292857e56/crio-a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba: Error finding container a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba: Status 404 returned error can't find the container with id a3c535c59b0fba8601c7c524bd2b01d3abca4f37912e04fd577d666764cf72ba
	Nov 24 08:55:32 functional-014740 kubelet[7797]: E1124 08:55:32.818476    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974532817933948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:55:32 functional-014740 kubelet[7797]: E1124 08:55:32.818505    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974532817933948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:55:33 functional-014740 kubelet[7797]: E1124 08:55:33.453836    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="619282a4-15ce-4cc0-b729-c1ded2320f4e"
	Nov 24 08:55:38 functional-014740 kubelet[7797]: E1124 08:55:38.452816    7797 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-l8z65" containerName="coredns"
	Nov 24 08:55:39 functional-014740 kubelet[7797]: E1124 08:55:39.453357    7797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-sqj9h" containerName="kubernetes-dashboard"
	Nov 24 08:55:41 functional-014740 kubelet[7797]: E1124 08:55:41.230628    7797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Nov 24 08:55:41 functional-014740 kubelet[7797]: E1124 08:55:41.230702    7797 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Nov 24 08:55:41 functional-014740 kubelet[7797]: E1124 08:55:41.230975    7797 kuberuntime_manager.go:1664] "Unhandled Error" err="container mysql start failed in pod mysql-844cf969f6-5jplz_default(7c67e2c1-1348-4d52-90ac-4e51eb6249c9): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 24 08:55:41 functional-014740 kubelet[7797]: E1124 08:55:41.231012    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-5jplz" podUID="7c67e2c1-1348-4d52-90ac-4e51eb6249c9"
	Nov 24 08:55:42 functional-014740 kubelet[7797]: E1124 08:55:42.821581    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974542820967456  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:55:42 functional-014740 kubelet[7797]: E1124 08:55:42.821616    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974542820967456  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:55:46 functional-014740 kubelet[7797]: E1124 08:55:46.453696    7797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-014740" containerName="kube-controller-manager"
	Nov 24 08:55:52 functional-014740 kubelet[7797]: E1124 08:55:52.823973    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974552823585098  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:55:52 functional-014740 kubelet[7797]: E1124 08:55:52.823996    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974552823585098  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:55:55 functional-014740 kubelet[7797]: E1124 08:55:55.457276    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-5jplz" podUID="7c67e2c1-1348-4d52-90ac-4e51eb6249c9"
	Nov 24 08:55:57 functional-014740 kubelet[7797]: E1124 08:55:57.453618    7797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-014740" containerName="kube-scheduler"
	Nov 24 08:56:02 functional-014740 kubelet[7797]: E1124 08:56:02.827068    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974562826464785  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:56:02 functional-014740 kubelet[7797]: E1124 08:56:02.827131    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974562826464785  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:56:06 functional-014740 kubelet[7797]: E1124 08:56:06.453335    7797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-014740" containerName="kube-apiserver"
	Nov 24 08:56:10 functional-014740 kubelet[7797]: E1124 08:56:10.456768    7797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-5jplz" podUID="7c67e2c1-1348-4d52-90ac-4e51eb6249c9"
	Nov 24 08:56:12 functional-014740 kubelet[7797]: E1124 08:56:12.829023    7797 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763974572828600669  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:56:12 functional-014740 kubelet[7797]: E1124 08:56:12.829048    7797 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763974572828600669  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:227769}  inodes_used:{value:106}}"
	Nov 24 08:56:16 functional-014740 kubelet[7797]: E1124 08:56:16.453465    7797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-014740" containerName="etcd"
	
	
	==> kubernetes-dashboard [d0856c189bb341f7c7de464886032d7dd61d9a44bb0ad24aa3306f4a6f9ce827] <==
	2025/11/24 08:46:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 08:46:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 08:46:29 Initializing JWE encryption key from synchronized object
	2025/11/24 08:46:29 Creating in-cluster Sidecar client
	2025/11/24 08:46:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:46:29 Serving insecurely on HTTP port: 9090
	2025/11/24 08:46:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:47:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:47:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:48:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:48:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:49:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:49:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:50:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:50:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:51:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:52:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:52:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:53:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:53:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:54:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:54:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:55:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:55:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 08:56:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4c57362e95194aaa3ff4a02db52123d012b93995fb28c6589c07b0b83a935928] <==
	W1124 08:55:53.459951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:55:55.463838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:55:55.474514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:55:57.477885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:55:57.484072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:55:59.488327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:55:59.496528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:01.500048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:01.505724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:03.510320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:03.520018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:05.523615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:05.534157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:07.538007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:07.543854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:09.547558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:09.556623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:11.559838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:11.565061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:13.568455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:13.573784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:15.577683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:15.582200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:17.586210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 08:56:17.594449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6fa7c1a121ada4844809ab52d7f94c4f85fd273503f056ccb2d8e1b98c68d64f] <==
	I1124 08:45:24.822469       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 08:45:24.836569       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-014740 -n functional-014740
helpers_test.go:269: (dbg) Run:  kubectl --context functional-014740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-014740 describe pod busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-014740 describe pod busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr: exit status 1 (89.490829ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014740/192.168.39.85
	Start Time:       Mon, 24 Nov 2025 08:46:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://319a0e58c733ccd1ffb81a38a57ee7f54a0b9882f76a4724517c25eabb12ca95
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 08:46:08 +0000
	      Finished:     Mon, 24 Nov 2025 08:46:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6x57 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n6x57:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-014740
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.277s (3.278s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             mysql-844cf969f6-5jplz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014740/192.168.39.85
	Start Time:       Mon, 24 Nov 2025 08:46:15 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r46jc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r46jc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-844cf969f6-5jplz to functional-014740
	  Warning  Failed     5m55s (x2 over 9m24s)   kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m26s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m54s (x5 over 9m24s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m54s (x3 over 8m21s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s (x16 over 9m24s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    109s (x21 over 9m24s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014740/192.168.39.85
	Start Time:       Mon, 24 Nov 2025 08:46:06 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5k24d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5k24d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/sp-pod to functional-014740
	  Normal   Pulled     10m                     kubelet            Successfully pulled image "docker.io/nginx" in 7.729s (8.804s including waiting). Image size: 155491845 bytes.
	  Warning  Failed     9m46s                   kubelet            Error: container create failed: time="2025-11-24T08:46:15Z" level=error msg="runc create failed: unable to start container process: error during container init: exec: \"/docker-entrypoint.sh\": stat /docker-entrypoint.sh: no such file or directory"
	  Normal   Pulling    3m51s (x6 over 10m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m19s (x5 over 8m52s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m19s (x5 over 8m52s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m28s (x15 over 8m52s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    98s (x19 over 8m52s)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-5ndqr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-014740 describe pod busybox-mount mysql-844cf969f6-5jplz sp-pod dashboard-metrics-scraper-5565989548-5ndqr: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.59s)
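Both pods described above (mysql-844cf969f6-5jplz and sp-pod) are stuck on docker.io's unauthenticated pull rate limit rather than on the workloads themselves. One possible mitigation, sketched here and not part of the test run, is to side-load the rate-limited images into the profile so CRI-O resolves them locally; this assumes the images are already available on the host:

    # Sketch only: side-load the images the kubelet keeps failing to pull.
    out/minikube-linux-amd64 -p functional-014740 image load docker.io/mysql:5.7
    out/minikube-linux-amd64 -p functional-014740 image load docker.io/nginx
    # Then confirm the pods leave ImagePullBackOff:
    kubectl --context functional-014740 get pods mysql-844cf969f6-5jplz sp-pod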

                                                
                                    
x
+
TestPreload (153.08s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-680081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1124 09:34:03.104670    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-680081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m30.158598637s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-680081 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-680081 image pull gcr.io/k8s-minikube/busybox: (3.797025161s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-680081
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-680081: (6.862031006s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-680081 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1124 09:36:00.044284    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-680081 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (49.555076451s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-680081 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
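The list above is what the final `image list` returned; preload_test.go:75 fails because gcr.io/k8s-minikube/busybox is not in it after the stop/start cycle. A condensed manual re-run of the same flow, using the flags from the invocations logged above, would look like this:

    # Manual re-run sketch of the TestPreload flow, flags copied from the logged invocations.
    out/minikube-linux-amd64 start -p test-preload-680081 --memory=3072 --wait=true --preload=false \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-amd64 -p test-preload-680081 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-680081
    out/minikube-linux-amd64 start -p test-preload-680081 --memory=3072 --wait=true \
        --driver=kvm2 --container-runtime=crio
    # The test's expectation: busybox should still be listed after the restart.
    out/minikube-linux-amd64 -p test-preload-680081 image list | grep busybox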
panic.go:615: *** TestPreload FAILED at 2025-11-24 09:36:30.298823045 +0000 UTC m=+4069.684170521
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-680081 -n test-preload-680081
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-680081 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-304941 ssh -n multinode-304941-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ ssh     │ multinode-304941 ssh -n multinode-304941 sudo cat /home/docker/cp-test_multinode-304941-m03_multinode-304941.txt                                          │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ cp      │ multinode-304941 cp multinode-304941-m03:/home/docker/cp-test.txt multinode-304941-m02:/home/docker/cp-test_multinode-304941-m03_multinode-304941-m02.txt │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ ssh     │ multinode-304941 ssh -n multinode-304941-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ ssh     │ multinode-304941 ssh -n multinode-304941-m02 sudo cat /home/docker/cp-test_multinode-304941-m03_multinode-304941-m02.txt                                  │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ node    │ multinode-304941 node stop m03                                                                                                                            │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ node    │ multinode-304941 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:24 UTC │
	│ node    │ list -p multinode-304941                                                                                                                                  │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:24 UTC │                     │
	│ stop    │ -p multinode-304941                                                                                                                                       │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:24 UTC │ 24 Nov 25 09:26 UTC │
	│ start   │ -p multinode-304941 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:26 UTC │ 24 Nov 25 09:29 UTC │
	│ node    │ list -p multinode-304941                                                                                                                                  │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │                     │
	│ node    │ multinode-304941 node delete m03                                                                                                                          │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:29 UTC │
	│ stop    │ multinode-304941 stop                                                                                                                                     │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:29 UTC │ 24 Nov 25 09:31 UTC │
	│ start   │ -p multinode-304941 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:31 UTC │ 24 Nov 25 09:33 UTC │
	│ node    │ list -p multinode-304941                                                                                                                                  │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │                     │
	│ start   │ -p multinode-304941-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-304941-m02 │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │                     │
	│ start   │ -p multinode-304941-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-304941-m03 │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │ 24 Nov 25 09:33 UTC │
	│ node    │ add -p multinode-304941                                                                                                                                   │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │                     │
	│ delete  │ -p multinode-304941-m03                                                                                                                                   │ multinode-304941-m03 │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │ 24 Nov 25 09:33 UTC │
	│ delete  │ -p multinode-304941                                                                                                                                       │ multinode-304941     │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │ 24 Nov 25 09:33 UTC │
	│ start   │ -p test-preload-680081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-680081  │ jenkins │ v1.37.0 │ 24 Nov 25 09:33 UTC │ 24 Nov 25 09:35 UTC │
	│ image   │ test-preload-680081 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-680081  │ jenkins │ v1.37.0 │ 24 Nov 25 09:35 UTC │ 24 Nov 25 09:35 UTC │
	│ stop    │ -p test-preload-680081                                                                                                                                    │ test-preload-680081  │ jenkins │ v1.37.0 │ 24 Nov 25 09:35 UTC │ 24 Nov 25 09:35 UTC │
	│ start   │ -p test-preload-680081 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-680081  │ jenkins │ v1.37.0 │ 24 Nov 25 09:35 UTC │ 24 Nov 25 09:36 UTC │
	│ image   │ test-preload-680081 image list                                                                                                                            │ test-preload-680081  │ jenkins │ v1.37.0 │ 24 Nov 25 09:36 UTC │ 24 Nov 25 09:36 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:35:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:35:40.599664   39174 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:35:40.599911   39174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:35:40.599920   39174 out.go:374] Setting ErrFile to fd 2...
	I1124 09:35:40.599924   39174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:35:40.600107   39174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:35:40.600566   39174 out.go:368] Setting JSON to false
	I1124 09:35:40.601415   39174 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4677,"bootTime":1763972264,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:35:40.601471   39174 start.go:143] virtualization: kvm guest
	I1124 09:35:40.603530   39174 out.go:179] * [test-preload-680081] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:35:40.604916   39174 notify.go:221] Checking for updates...
	I1124 09:35:40.604947   39174 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:35:40.606124   39174 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:35:40.607249   39174 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:35:40.608322   39174 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 09:35:40.609370   39174 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:35:40.610582   39174 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:35:40.611982   39174 config.go:182] Loaded profile config "test-preload-680081": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 09:35:40.613611   39174 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1124 09:35:40.614640   39174 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:35:40.648796   39174 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 09:35:40.649802   39174 start.go:309] selected driver: kvm2
	I1124 09:35:40.649827   39174 start.go:927] validating driver "kvm2" against &{Name:test-preload-680081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.32.0 ClusterName:test-preload-680081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:35:40.649956   39174 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:35:40.650874   39174 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:35:40.650910   39174 cni.go:84] Creating CNI manager for ""
	I1124 09:35:40.650987   39174 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:35:40.651057   39174 start.go:353] cluster config:
	{Name:test-preload-680081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-680081 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:35:40.651202   39174 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:35:40.653703   39174 out.go:179] * Starting "test-preload-680081" primary control-plane node in "test-preload-680081" cluster
	I1124 09:35:40.654971   39174 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 09:35:40.803448   39174 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1124 09:35:40.803484   39174 cache.go:65] Caching tarball of preloaded images
	I1124 09:35:40.803674   39174 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 09:35:40.805565   39174 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1124 09:35:40.806857   39174 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 09:35:41.279181   39174 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1124 09:35:41.279244   39174 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1124 09:35:51.161437   39174 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
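For reference, the cached tarball can be checked on the host after this step; the path and expected md5 below are the ones reported in the download lines above (a sketch):

    # Host-side check of the cached preload tarball (path and md5 taken from the log above).
    ls -lh /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/
    md5sum /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
    # expected: 2acdb4dde52794f2167c79dcee7507ae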
	I1124 09:35:51.161575   39174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/config.json ...
	I1124 09:35:51.161801   39174 start.go:360] acquireMachinesLock for test-preload-680081: {Name:mk7b5988e566cc8ac324d849b09ff116b4f24553 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 09:35:51.161867   39174 start.go:364] duration metric: took 43.917µs to acquireMachinesLock for "test-preload-680081"
	I1124 09:35:51.161883   39174 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:35:51.161889   39174 fix.go:54] fixHost starting: 
	I1124 09:35:51.163873   39174 fix.go:112] recreateIfNeeded on test-preload-680081: state=Stopped err=<nil>
	W1124 09:35:51.163898   39174 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:35:51.165696   39174 out.go:252] * Restarting existing kvm2 VM for "test-preload-680081" ...
	I1124 09:35:51.165745   39174 main.go:143] libmachine: starting domain...
	I1124 09:35:51.165756   39174 main.go:143] libmachine: ensuring networks are active...
	I1124 09:35:51.166480   39174 main.go:143] libmachine: Ensuring network default is active
	I1124 09:35:51.166796   39174 main.go:143] libmachine: Ensuring network mk-test-preload-680081 is active
	I1124 09:35:51.167176   39174 main.go:143] libmachine: getting domain XML...
	I1124 09:35:51.168220   39174 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-680081</name>
	  <uuid>39c91d2a-aeb8-4bcc-86a5-0b1dec949781</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/test-preload-680081.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:86:45:b4'/>
	      <source network='mk-test-preload-680081'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:00:62:bf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
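The XML above is the domain definition libmachine boots for the restarted VM. If this step needs debugging, the same domain can be inspected directly with standard libvirt tooling on the host (a sketch; the connection URI, domain name, and network name are the ones shown in this log):

    # Inspect the domain and its DHCP lease directly via libvirt (host side).
    virsh -c qemu:///system dumpxml test-preload-680081
    virsh -c qemu:///system domifaddr test-preload-680081
    virsh -c qemu:///system net-dhcp-leases mk-test-preload-680081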
	
	I1124 09:35:52.426244   39174 main.go:143] libmachine: waiting for domain to start...
	I1124 09:35:52.427706   39174 main.go:143] libmachine: domain is now running
	I1124 09:35:52.427729   39174 main.go:143] libmachine: waiting for IP...
	I1124 09:35:52.428641   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:35:52.429348   39174 main.go:143] libmachine: domain test-preload-680081 has current primary IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:35:52.429367   39174 main.go:143] libmachine: found domain IP: 192.168.39.97
	I1124 09:35:52.429391   39174 main.go:143] libmachine: reserving static IP address...
	I1124 09:35:52.429853   39174 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-680081", mac: "52:54:00:86:45:b4", ip: "192.168.39.97"} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:34:14 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:35:52.429888   39174 main.go:143] libmachine: skip adding static IP to network mk-test-preload-680081 - found existing host DHCP lease matching {name: "test-preload-680081", mac: "52:54:00:86:45:b4", ip: "192.168.39.97"}
	I1124 09:35:52.429905   39174 main.go:143] libmachine: reserved static IP address 192.168.39.97 for domain test-preload-680081
	I1124 09:35:52.429929   39174 main.go:143] libmachine: waiting for SSH...
	I1124 09:35:52.429943   39174 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 09:35:52.432473   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:35:52.432898   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:34:14 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:35:52.432939   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:35:52.433146   39174 main.go:143] libmachine: Using SSH client type: native
	I1124 09:35:52.433492   39174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1124 09:35:52.433508   39174 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 09:35:55.512434   39174 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.97:22: connect: no route to host
	I1124 09:36:01.592544   39174 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.97:22: connect: no route to host
	I1124 09:36:04.710000   39174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
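The two "no route to host" dials above are the normal wait for the guest's sshd to come up after the restart. Had the connection never succeeded, reachability could be probed by hand with the profile's machine key (key path and user are the ones the sshutil lines in this log report); a sketch:

    # Manual SSH reachability probe, using the profile's machine key.
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa \
        docker@192.168.39.97 'exit 0'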
	I1124 09:36:04.713245   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.713639   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:04.713669   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.713876   39174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/config.json ...
	I1124 09:36:04.714079   39174 machine.go:94] provisionDockerMachine start ...
	I1124 09:36:04.716679   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.717765   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:04.717802   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.717958   39174 main.go:143] libmachine: Using SSH client type: native
	I1124 09:36:04.718177   39174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1124 09:36:04.718190   39174 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:36:04.829827   39174 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 09:36:04.829861   39174 buildroot.go:166] provisioning hostname "test-preload-680081"
	I1124 09:36:04.833509   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.833961   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:04.833988   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.834147   39174 main.go:143] libmachine: Using SSH client type: native
	I1124 09:36:04.834368   39174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1124 09:36:04.834382   39174 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-680081 && echo "test-preload-680081" | sudo tee /etc/hostname
	I1124 09:36:04.960943   39174 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-680081
	
	I1124 09:36:04.963729   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.964197   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:04.964221   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:04.964403   39174 main.go:143] libmachine: Using SSH client type: native
	I1124 09:36:04.964649   39174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1124 09:36:04.964669   39174 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-680081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-680081/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-680081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:36:05.085064   39174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:36:05.085104   39174 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 09:36:05.085155   39174 buildroot.go:174] setting up certificates
	I1124 09:36:05.085203   39174 provision.go:84] configureAuth start
	I1124 09:36:05.088229   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.088609   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.088634   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.090830   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.091140   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.091170   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.091280   39174 provision.go:143] copyHostCerts
	I1124 09:36:05.091323   39174 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem, removing ...
	I1124 09:36:05.091338   39174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem
	I1124 09:36:05.091412   39174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 09:36:05.091526   39174 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem, removing ...
	I1124 09:36:05.091537   39174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem
	I1124 09:36:05.091565   39174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 09:36:05.091620   39174 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem, removing ...
	I1124 09:36:05.091627   39174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem
	I1124 09:36:05.091649   39174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 09:36:05.091696   39174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.test-preload-680081 san=[127.0.0.1 192.168.39.97 localhost minikube test-preload-680081]
	I1124 09:36:05.109233   39174 provision.go:177] copyRemoteCerts
	I1124 09:36:05.109304   39174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:36:05.111902   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.112353   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.112376   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.112503   39174 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa Username:docker}
	I1124 09:36:05.199919   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:36:05.230567   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:36:05.258974   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 09:36:05.286970   39174 provision.go:87] duration metric: took 201.754184ms to configureAuth
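configureAuth above regenerates the server certificate and copies it, its key, and the CA into /etc/docker on the guest. What landed there can be verified in-guest, for example (a sketch; assumes openssl is available in the guest image, and uses the scp target path above):

    # Inspect the provisioned server certificate inside the guest.
    out/minikube-linux-amd64 -p test-preload-680081 ssh -- \
        sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate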
	I1124 09:36:05.286995   39174 buildroot.go:189] setting minikube options for container-runtime
	I1124 09:36:05.287179   39174 config.go:182] Loaded profile config "test-preload-680081": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 09:36:05.289653   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.289999   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.290025   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.290198   39174 main.go:143] libmachine: Using SSH client type: native
	I1124 09:36:05.290375   39174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1124 09:36:05.290389   39174 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:36:05.535385   39174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:36:05.535427   39174 machine.go:97] duration metric: took 821.333311ms to provisionDockerMachine
	I1124 09:36:05.535445   39174 start.go:293] postStartSetup for "test-preload-680081" (driver="kvm2")
	I1124 09:36:05.535459   39174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:36:05.535554   39174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:36:05.538565   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.539033   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.539067   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.539239   39174 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa Username:docker}
	I1124 09:36:05.628863   39174 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:36:05.633519   39174 info.go:137] Remote host: Buildroot 2025.02
	I1124 09:36:05.633547   39174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 09:36:05.633621   39174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 09:36:05.633726   39174 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem -> 96292.pem in /etc/ssl/certs
	I1124 09:36:05.633856   39174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:36:05.645479   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:36:05.676457   39174 start.go:296] duration metric: took 140.99826ms for postStartSetup
	I1124 09:36:05.676503   39174 fix.go:56] duration metric: took 14.514613688s for fixHost
	I1124 09:36:05.679059   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.679546   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.679571   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.679750   39174 main.go:143] libmachine: Using SSH client type: native
	I1124 09:36:05.679947   39174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1124 09:36:05.679956   39174 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 09:36:05.793437   39174 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763976965.748199274
	
	I1124 09:36:05.793463   39174 fix.go:216] guest clock: 1763976965.748199274
	I1124 09:36:05.793473   39174 fix.go:229] Guest: 2025-11-24 09:36:05.748199274 +0000 UTC Remote: 2025-11-24 09:36:05.676508525 +0000 UTC m=+25.125177246 (delta=71.690749ms)
	I1124 09:36:05.793488   39174 fix.go:200] guest clock delta is within tolerance: 71.690749ms
	I1124 09:36:05.793494   39174 start.go:83] releasing machines lock for "test-preload-680081", held for 14.631616468s
	I1124 09:36:05.796346   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.796729   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.796752   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.797402   39174 ssh_runner.go:195] Run: cat /version.json
	I1124 09:36:05.797510   39174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:36:05.800318   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.800429   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.800807   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.800837   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.800917   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:05.800960   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:05.801054   39174 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa Username:docker}
	I1124 09:36:05.801263   39174 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa Username:docker}
	I1124 09:36:05.881453   39174 ssh_runner.go:195] Run: systemctl --version
	I1124 09:36:05.917435   39174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:36:06.068121   39174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:36:06.074779   39174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:36:06.074845   39174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:36:06.094930   39174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:36:06.094953   39174 start.go:496] detecting cgroup driver to use...
	I1124 09:36:06.095016   39174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:36:06.114231   39174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:36:06.131743   39174 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:36:06.131808   39174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:36:06.149264   39174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:36:06.165702   39174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:36:06.311609   39174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:36:06.526490   39174 docker.go:234] disabling docker service ...
	I1124 09:36:06.526556   39174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:36:06.543137   39174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:36:06.557520   39174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:36:06.710333   39174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:36:06.846883   39174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:36:06.862248   39174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:36:06.883647   39174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1124 09:36:06.883705   39174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.895600   39174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:36:06.895657   39174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.907501   39174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.919140   39174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.931260   39174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:36:06.943420   39174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.955624   39174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.975993   39174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:36:06.987988   39174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:36:06.997869   39174 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 09:36:06.997924   39174 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 09:36:07.016986   39174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:36:07.028276   39174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:36:07.167969   39174 ssh_runner.go:195] Run: sudo systemctl restart crio
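The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl) before CRI-O is restarted. The resulting drop-in can be checked in-guest, for example (a sketch):

    # Show the rewritten CRI-O drop-in that the sed commands above produced.
    out/minikube-linux-amd64 -p test-preload-680081 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf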
	I1124 09:36:07.274665   39174 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:36:07.274756   39174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:36:07.279812   39174 start.go:564] Will wait 60s for crictl version
	I1124 09:36:07.279858   39174 ssh_runner.go:195] Run: which crictl
	I1124 09:36:07.283886   39174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 09:36:07.318012   39174 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 09:36:07.318084   39174 ssh_runner.go:195] Run: crio --version
	I1124 09:36:07.346378   39174 ssh_runner.go:195] Run: crio --version
	I1124 09:36:07.375547   39174 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1124 09:36:07.379209   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:07.379607   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:07.379625   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:07.379844   39174 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 09:36:07.383823   39174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:36:07.398039   39174 kubeadm.go:884] updating cluster {Name:test-preload-680081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-680081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:36:07.398142   39174 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 09:36:07.398210   39174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:36:07.430520   39174 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1124 09:36:07.430588   39174 ssh_runner.go:195] Run: which lz4
	I1124 09:36:07.434605   39174 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 09:36:07.439199   39174 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 09:36:07.439240   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1124 09:36:08.837359   39174 crio.go:462] duration metric: took 1.402776592s to copy over tarball
	I1124 09:36:08.837451   39174 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 09:36:10.598416   39174 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.760924641s)
	I1124 09:36:10.598451   39174 crio.go:469] duration metric: took 1.761056017s to extract the tarball
	I1124 09:36:10.598458   39174 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 09:36:10.639660   39174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:36:10.683114   39174 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:36:10.683138   39174 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:36:10.683145   39174 kubeadm.go:935] updating node { 192.168.39.97 8443 v1.32.0 crio true true} ...
	I1124 09:36:10.683245   39174 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-680081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-680081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:36:10.683308   39174 ssh_runner.go:195] Run: crio config
	I1124 09:36:10.729520   39174 cni.go:84] Creating CNI manager for ""
	I1124 09:36:10.729547   39174 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:36:10.729565   39174 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:36:10.729585   39174 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-680081 NodeName:test-preload-680081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:36:10.729685   39174 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-680081"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.97"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:36:10.729748   39174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1124 09:36:10.741673   39174 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:36:10.741738   39174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:36:10.752850   39174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1124 09:36:10.772712   39174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:36:10.792219   39174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
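	The 2219-byte kubeadm.yaml.new copied above is the multi-document config rendered earlier in this log. A minimal Go sketch (not minikube's own code; it assumes the gopkg.in/yaml.v3 dependency and a local ./kubeadm.yaml copy of that file) that reads the KubeletConfiguration document back out:

// kubelet_cfg_check.go - reads the KubeletConfiguration document from a
// multi-document kubeadm.yaml and prints the fields shown in the log above.
// Sketch only; assumes gopkg.in/yaml.v3 and a local ./kubeadm.yaml copy.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Only the KubeletConfiguration document carries these fields.
		if doc.Kind == "KubeletConfiguration" {
			fmt.Printf("cgroupDriver=%s runtime=%s staticPodPath=%s failSwapOn=%v\n",
				doc.CgroupDriver, doc.ContainerRuntimeEndpoint, doc.StaticPodPath, doc.FailSwapOn)
		}
	}
}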
	I1124 09:36:10.812354   39174 ssh_runner.go:195] Run: grep 192.168.39.97	control-plane.minikube.internal$ /etc/hosts
	I1124 09:36:10.816305   39174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:36:10.830488   39174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:36:10.968349   39174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:36:10.998598   39174 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081 for IP: 192.168.39.97
	I1124 09:36:10.998626   39174 certs.go:195] generating shared ca certs ...
	I1124 09:36:10.998647   39174 certs.go:227] acquiring lock for ca certs: {Name:mkc847d4fb6fb61872e24a1bb00356ff9ef1a409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:36:10.998825   39174 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key
	I1124 09:36:10.998892   39174 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key
	I1124 09:36:10.998907   39174 certs.go:257] generating profile certs ...
	I1124 09:36:10.998996   39174 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.key
	I1124 09:36:10.999077   39174 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/apiserver.key.49d48f6b
	I1124 09:36:10.999135   39174 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/proxy-client.key
	I1124 09:36:10.999280   39174 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem (1338 bytes)
	W1124 09:36:10.999336   39174 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629_empty.pem, impossibly tiny 0 bytes
	I1124 09:36:10.999351   39174 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:36:10.999397   39174 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:36:10.999439   39174 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:36:10.999475   39174 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem (1675 bytes)
	I1124 09:36:10.999547   39174 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:36:11.000123   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:36:11.030405   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:36:11.062525   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:36:11.092604   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:36:11.121368   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:36:11.150145   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:36:11.179271   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:36:11.208891   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:36:11.238477   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /usr/share/ca-certificates/96292.pem (1708 bytes)
	I1124 09:36:11.268982   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:36:11.298329   39174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem --> /usr/share/ca-certificates/9629.pem (1338 bytes)
	I1124 09:36:11.327388   39174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:36:11.347907   39174 ssh_runner.go:195] Run: openssl version
	I1124 09:36:11.354432   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96292.pem && ln -fs /usr/share/ca-certificates/96292.pem /etc/ssl/certs/96292.pem"
	I1124 09:36:11.367824   39174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96292.pem
	I1124 09:36:11.373149   39174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:42 /usr/share/ca-certificates/96292.pem
	I1124 09:36:11.373236   39174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96292.pem
	I1124 09:36:11.380471   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:36:11.393982   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:36:11.407065   39174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:36:11.412208   39174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:36:11.412258   39174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:36:11.419273   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:36:11.432114   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9629.pem && ln -fs /usr/share/ca-certificates/9629.pem /etc/ssl/certs/9629.pem"
	I1124 09:36:11.444914   39174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9629.pem
	I1124 09:36:11.449806   39174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:42 /usr/share/ca-certificates/9629.pem
	I1124 09:36:11.449865   39174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9629.pem
	I1124 09:36:11.457043   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9629.pem /etc/ssl/certs/51391683.0"
	I1124 09:36:11.470048   39174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:36:11.475094   39174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:36:11.482257   39174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:36:11.489341   39174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:36:11.496933   39174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:36:11.504300   39174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:36:11.511538   39174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
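	The openssl x509 -checkend 86400 runs above ask whether each certificate will still be valid 24 hours from now. A minimal Go sketch of the same check (not minikube's implementation; the certificate path is one of the examples above and normally requires root to read):

// cert_expiry_check.go - equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// exit non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // same window as -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires %s: within 24h\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}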
	I1124 09:36:11.519092   39174 kubeadm.go:401] StartCluster: {Name:test-preload-680081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-680081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:36:11.519208   39174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:36:11.519257   39174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:36:11.551308   39174 cri.go:89] found id: ""
	I1124 09:36:11.551383   39174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:36:11.563660   39174 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:36:11.563688   39174 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:36:11.563740   39174 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:36:11.575352   39174 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:36:11.575736   39174 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-680081" does not appear in /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:36:11.575859   39174 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5665/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-680081" cluster setting kubeconfig missing "test-preload-680081" context setting]
	I1124 09:36:11.576095   39174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:36:11.576581   39174 kapi.go:59] client config for test-preload-680081: &rest.Config{Host:"https://192.168.39.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:36:11.576983   39174 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 09:36:11.577002   39174 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 09:36:11.577010   39174 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 09:36:11.577016   39174 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 09:36:11.577022   39174 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
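	The rest.Config dump above is the client configuration built from the repaired kubeconfig. For comparison, a minimal client-go sketch (not minikube code; the kubeconfig path is taken from this log and is only an example) that builds a config the same way and lists kube-system pods, as the later system_pods checks do:

// list_kube_system_pods.go - builds a rest.Config from a kubeconfig and lists
// kube-system pods. Sketch only; the kubeconfig path is an example.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21978-5665/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}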
	I1124 09:36:11.577329   39174 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:36:11.588620   39174 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.97
	I1124 09:36:11.588647   39174 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:36:11.588657   39174 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:36:11.588697   39174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:36:11.622953   39174 cri.go:89] found id: ""
	I1124 09:36:11.623022   39174 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:36:11.641646   39174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:36:11.654171   39174 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:36:11.654195   39174 kubeadm.go:158] found existing configuration files:
	
	I1124 09:36:11.654239   39174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:36:11.665426   39174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:36:11.665511   39174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:36:11.677825   39174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:36:11.688742   39174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:36:11.688808   39174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:36:11.701000   39174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:36:11.712138   39174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:36:11.712225   39174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:36:11.723867   39174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:36:11.734362   39174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:36:11.734424   39174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:36:11.745716   39174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:36:11.757021   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:36:11.811782   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:36:13.034188   39174 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.222353975s)
	I1124 09:36:13.034266   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:36:13.281616   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:36:13.347826   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:36:13.430398   39174 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:36:13.430495   39174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:36:13.931611   39174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:36:14.430961   39174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:36:14.930754   39174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:36:15.430982   39174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:36:15.463143   39174 api_server.go:72] duration metric: took 2.032759237s to wait for apiserver process to appear ...
	I1124 09:36:15.463183   39174 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:36:15.463204   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:17.784908   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:36:17.784944   39174 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:36:17.784965   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:17.861799   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:36:17.861852   39174 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:36:17.964270   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:17.970632   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:36:17.970660   39174 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:36:18.463325   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:18.467586   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:36:18.467618   39174 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:36:18.963283   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:18.972598   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:36:18.972627   39174 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:36:19.463351   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:19.468930   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I1124 09:36:19.478716   39174 api_server.go:141] control plane version: v1.32.0
	I1124 09:36:19.478745   39174 api_server.go:131] duration metric: took 4.015555193s to wait for apiserver health ...
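	The 403/500/200 sequence above is minikube polling the apiserver /healthz endpoint until it reports healthy. A rough stand-alone sketch of such a loop (assumptions: the endpoint is hard-coded, TLS verification is skipped, and the retry interval is a fixed 500ms; minikube's own check authenticates with the cluster CA and client certificates):

// healthz_poll.go - polls an apiserver /healthz endpoint until it returns
// 200 OK or a deadline elapses. Sketch only; endpoint and TLS handling are
// simplified relative to what the log above shows.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.97:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}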
	I1124 09:36:19.478754   39174 cni.go:84] Creating CNI manager for ""
	I1124 09:36:19.478759   39174 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:36:19.480651   39174 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 09:36:19.481854   39174 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:36:19.500917   39174 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 09:36:19.524895   39174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:36:19.529964   39174 system_pods.go:59] 7 kube-system pods found
	I1124 09:36:19.530010   39174 system_pods.go:61] "coredns-668d6bf9bc-6mbhd" [9df56161-50bd-4de0-97ed-3624b1945f49] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:36:19.530020   39174 system_pods.go:61] "etcd-test-preload-680081" [61047044-fb95-4558-8a5f-d3fb617bcb02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:36:19.530029   39174 system_pods.go:61] "kube-apiserver-test-preload-680081" [624a6f43-5c7a-4d67-b95a-aab523f8cde5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:36:19.530038   39174 system_pods.go:61] "kube-controller-manager-test-preload-680081" [a3447405-c167-4016-be8b-27226d3a4a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:36:19.530044   39174 system_pods.go:61] "kube-proxy-nkbcj" [d54c7e21-bddf-4288-9ebe-86cef0aff52e] Running
	I1124 09:36:19.530053   39174 system_pods.go:61] "kube-scheduler-test-preload-680081" [e4aacb02-a464-447e-9eae-94d5e6273ba3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:36:19.530058   39174 system_pods.go:61] "storage-provisioner" [4556911e-0f91-4db3-b3a7-8ac95ef42d23] Running
	I1124 09:36:19.530066   39174 system_pods.go:74] duration metric: took 5.144241ms to wait for pod list to return data ...
	I1124 09:36:19.530075   39174 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:36:19.534039   39174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:36:19.534067   39174 node_conditions.go:123] node cpu capacity is 2
	I1124 09:36:19.534079   39174 node_conditions.go:105] duration metric: took 4.000125ms to run NodePressure ...
	I1124 09:36:19.534135   39174 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:36:19.795294   39174 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 09:36:19.802413   39174 kubeadm.go:744] kubelet initialised
	I1124 09:36:19.802438   39174 kubeadm.go:745] duration metric: took 7.111443ms waiting for restarted kubelet to initialise ...
	I1124 09:36:19.802452   39174 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:36:19.827727   39174 ops.go:34] apiserver oom_adj: -16
	I1124 09:36:19.827754   39174 kubeadm.go:602] duration metric: took 8.264059456s to restartPrimaryControlPlane
	I1124 09:36:19.827764   39174 kubeadm.go:403] duration metric: took 8.308681336s to StartCluster
	I1124 09:36:19.827782   39174 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:36:19.827984   39174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:36:19.828712   39174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:36:19.828989   39174 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:36:19.829078   39174 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:36:19.829192   39174 addons.go:70] Setting storage-provisioner=true in profile "test-preload-680081"
	I1124 09:36:19.829221   39174 addons.go:239] Setting addon storage-provisioner=true in "test-preload-680081"
	I1124 09:36:19.829224   39174 addons.go:70] Setting default-storageclass=true in profile "test-preload-680081"
	I1124 09:36:19.829256   39174 config.go:182] Loaded profile config "test-preload-680081": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 09:36:19.829260   39174 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-680081"
	W1124 09:36:19.829233   39174 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:36:19.829459   39174 host.go:66] Checking if "test-preload-680081" exists ...
	I1124 09:36:19.831446   39174 out.go:179] * Verifying Kubernetes components...
	I1124 09:36:19.831909   39174 kapi.go:59] client config for test-preload-680081: &rest.Config{Host:"https://192.168.39.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:36:19.832188   39174 addons.go:239] Setting addon default-storageclass=true in "test-preload-680081"
	W1124 09:36:19.832201   39174 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:36:19.832222   39174 host.go:66] Checking if "test-preload-680081" exists ...
	I1124 09:36:19.832816   39174 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:36:19.832874   39174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:36:19.833971   39174 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:36:19.833984   39174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:36:19.834295   39174 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:36:19.834311   39174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:36:19.837198   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:19.837471   39174 main.go:143] libmachine: domain test-preload-680081 has defined MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:19.837631   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:19.837663   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:19.837812   39174 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa Username:docker}
	I1124 09:36:19.838027   39174 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:45:b4", ip: ""} in network mk-test-preload-680081: {Iface:virbr1 ExpiryTime:2025-11-24 10:36:02 +0000 UTC Type:0 Mac:52:54:00:86:45:b4 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:test-preload-680081 Clientid:01:52:54:00:86:45:b4}
	I1124 09:36:19.838060   39174 main.go:143] libmachine: domain test-preload-680081 has defined IP address 192.168.39.97 and MAC address 52:54:00:86:45:b4 in network mk-test-preload-680081
	I1124 09:36:19.838258   39174 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/test-preload-680081/id_rsa Username:docker}
	I1124 09:36:20.064636   39174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:36:20.083555   39174 node_ready.go:35] waiting up to 6m0s for node "test-preload-680081" to be "Ready" ...
	I1124 09:36:20.086299   39174 node_ready.go:49] node "test-preload-680081" is "Ready"
	I1124 09:36:20.086334   39174 node_ready.go:38] duration metric: took 2.740187ms for node "test-preload-680081" to be "Ready" ...
	I1124 09:36:20.086349   39174 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:36:20.086398   39174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:36:20.111210   39174 api_server.go:72] duration metric: took 282.184787ms to wait for apiserver process to appear ...
	I1124 09:36:20.111246   39174 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:36:20.111269   39174 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I1124 09:36:20.117026   39174 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I1124 09:36:20.118216   39174 api_server.go:141] control plane version: v1.32.0
	I1124 09:36:20.118241   39174 api_server.go:131] duration metric: took 6.98798ms to wait for apiserver health ...
	I1124 09:36:20.118252   39174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:36:20.121456   39174 system_pods.go:59] 7 kube-system pods found
	I1124 09:36:20.121484   39174 system_pods.go:61] "coredns-668d6bf9bc-6mbhd" [9df56161-50bd-4de0-97ed-3624b1945f49] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:36:20.121491   39174 system_pods.go:61] "etcd-test-preload-680081" [61047044-fb95-4558-8a5f-d3fb617bcb02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:36:20.121500   39174 system_pods.go:61] "kube-apiserver-test-preload-680081" [624a6f43-5c7a-4d67-b95a-aab523f8cde5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:36:20.121510   39174 system_pods.go:61] "kube-controller-manager-test-preload-680081" [a3447405-c167-4016-be8b-27226d3a4a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:36:20.121516   39174 system_pods.go:61] "kube-proxy-nkbcj" [d54c7e21-bddf-4288-9ebe-86cef0aff52e] Running
	I1124 09:36:20.121521   39174 system_pods.go:61] "kube-scheduler-test-preload-680081" [e4aacb02-a464-447e-9eae-94d5e6273ba3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:36:20.121525   39174 system_pods.go:61] "storage-provisioner" [4556911e-0f91-4db3-b3a7-8ac95ef42d23] Running
	I1124 09:36:20.121532   39174 system_pods.go:74] duration metric: took 3.273892ms to wait for pod list to return data ...
	I1124 09:36:20.121539   39174 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:36:20.123738   39174 default_sa.go:45] found service account: "default"
	I1124 09:36:20.123755   39174 default_sa.go:55] duration metric: took 2.211774ms for default service account to be created ...
	I1124 09:36:20.123762   39174 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:36:20.127905   39174 system_pods.go:86] 7 kube-system pods found
	I1124 09:36:20.127928   39174 system_pods.go:89] "coredns-668d6bf9bc-6mbhd" [9df56161-50bd-4de0-97ed-3624b1945f49] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:36:20.127935   39174 system_pods.go:89] "etcd-test-preload-680081" [61047044-fb95-4558-8a5f-d3fb617bcb02] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:36:20.127942   39174 system_pods.go:89] "kube-apiserver-test-preload-680081" [624a6f43-5c7a-4d67-b95a-aab523f8cde5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:36:20.127951   39174 system_pods.go:89] "kube-controller-manager-test-preload-680081" [a3447405-c167-4016-be8b-27226d3a4a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:36:20.127961   39174 system_pods.go:89] "kube-proxy-nkbcj" [d54c7e21-bddf-4288-9ebe-86cef0aff52e] Running
	I1124 09:36:20.127974   39174 system_pods.go:89] "kube-scheduler-test-preload-680081" [e4aacb02-a464-447e-9eae-94d5e6273ba3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:36:20.127980   39174 system_pods.go:89] "storage-provisioner" [4556911e-0f91-4db3-b3a7-8ac95ef42d23] Running
	I1124 09:36:20.127988   39174 system_pods.go:126] duration metric: took 4.220335ms to wait for k8s-apps to be running ...
	I1124 09:36:20.128000   39174 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:36:20.128046   39174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:36:20.148017   39174 system_svc.go:56] duration metric: took 20.010164ms WaitForService to wait for kubelet
	I1124 09:36:20.148043   39174 kubeadm.go:587] duration metric: took 319.024121ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:36:20.148065   39174 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:36:20.151255   39174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:36:20.151276   39174 node_conditions.go:123] node cpu capacity is 2
	I1124 09:36:20.151286   39174 node_conditions.go:105] duration metric: took 3.217239ms to run NodePressure ...
	I1124 09:36:20.151297   39174 start.go:242] waiting for startup goroutines ...
	I1124 09:36:20.212761   39174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:36:20.253698   39174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:36:20.922208   39174 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 09:36:20.923871   39174 addons.go:530] duration metric: took 1.09480271s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 09:36:20.923929   39174 start.go:247] waiting for cluster config update ...
	I1124 09:36:20.923945   39174 start.go:256] writing updated cluster config ...
	I1124 09:36:20.924229   39174 ssh_runner.go:195] Run: rm -f paused
	I1124 09:36:20.929775   39174 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:36:20.930330   39174 kapi.go:59] client config for test-preload-680081: &rest.Config{Host:"https://192.168.39.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/test-preload-680081/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:36:20.933897   39174 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-6mbhd" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:36:22.940213   39174 pod_ready.go:104] pod "coredns-668d6bf9bc-6mbhd" is not "Ready", error: <nil>
	W1124 09:36:24.942311   39174 pod_ready.go:104] pod "coredns-668d6bf9bc-6mbhd" is not "Ready", error: <nil>
	I1124 09:36:27.440539   39174 pod_ready.go:94] pod "coredns-668d6bf9bc-6mbhd" is "Ready"
	I1124 09:36:27.440568   39174 pod_ready.go:86] duration metric: took 6.506644452s for pod "coredns-668d6bf9bc-6mbhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:27.443651   39174 pod_ready.go:83] waiting for pod "etcd-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:28.949994   39174 pod_ready.go:94] pod "etcd-test-preload-680081" is "Ready"
	I1124 09:36:28.950033   39174 pod_ready.go:86] duration metric: took 1.50633933s for pod "etcd-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:28.952830   39174 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:28.960093   39174 pod_ready.go:94] pod "kube-apiserver-test-preload-680081" is "Ready"
	I1124 09:36:28.960124   39174 pod_ready.go:86] duration metric: took 7.26873ms for pod "kube-apiserver-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:28.962239   39174 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:28.965911   39174 pod_ready.go:94] pod "kube-controller-manager-test-preload-680081" is "Ready"
	I1124 09:36:28.965934   39174 pod_ready.go:86] duration metric: took 3.676158ms for pod "kube-controller-manager-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:29.038141   39174 pod_ready.go:83] waiting for pod "kube-proxy-nkbcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:29.437829   39174 pod_ready.go:94] pod "kube-proxy-nkbcj" is "Ready"
	I1124 09:36:29.437867   39174 pod_ready.go:86] duration metric: took 399.679452ms for pod "kube-proxy-nkbcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:29.638219   39174 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:30.038692   39174 pod_ready.go:94] pod "kube-scheduler-test-preload-680081" is "Ready"
	I1124 09:36:30.038773   39174 pod_ready.go:86] duration metric: took 400.525919ms for pod "kube-scheduler-test-preload-680081" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:36:30.038788   39174 pod_ready.go:40] duration metric: took 9.108975864s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:36:30.084928   39174 start.go:625] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1124 09:36:30.086371   39174 out.go:203] 
	W1124 09:36:30.087508   39174 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1124 09:36:30.088631   39174 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:36:30.089770   39174 out.go:179] * Done! kubectl is now configured to use "test-preload-680081" cluster and "default" namespace by default
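
	The wait sequence logged above (node Ready, apiserver /healthz, kube-system pods, default service account, then a per-pod Ready wait keyed on the component/k8s-app labels) is ordinary readiness polling against the API server. Below is a minimal client-go sketch of that final label-based Ready wait; the kubeconfig path, timeout, and polling interval are illustrative assumptions, not minikube's actual pod_ready.go code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: default kubeconfig; the tests build their rest.Config directly instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// The same label selectors the log lines wait on, polled until Ready or timeout.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		deadline := time.Now().Add(4 * time.Minute)
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: sel})
				if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
					fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
					break
				}
				if time.Now().After(deadline) {
					fmt.Printf("timed out waiting for %s\n", sel)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}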
	
	
	==> CRI-O <==
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.851536005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763976990851509883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41bf7760-ea9a-4a6e-8216-5d74ff392e99 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.852751996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a09a0096-fa06-42b2-99be-ee2e60bbae99 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.852803849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a09a0096-fa06-42b2-99be-ee2e60bbae99 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.852977331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bcf206a5b71a67482d9ab57887b64938f4e80b715d996847257b6cef277a3b1f,PodSandboxId:c6418b1d4810e0b12a6417e9cee9f119e28506d27e95849568cfcb39706020fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763976982255456048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6mbhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df56161-50bd-4de0-97ed-3624b1945f49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee7cba8dd9479862553b720517a7e1a0aa594bf8a291cf4ee9e8f74fdd2ef5d,PodSandboxId:3b8e6e00bd622afe7674080ea0819c6c4e4378594d1f5619b547580df5e0c362,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763976978810335995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4556911e-0f91-4db3-b3a7-8ac95ef42d23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d25e79138509f6346749e8251b478fb90668b1050cdbd7a2305aa1b27cd6fc8f,PodSandboxId:5070b9742ba1abe3e141aec3b56dbdc45e920e1d608f6980d82e6a3aeb494686,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763976978828088044,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkbcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
4c7e21-bddf-4288-9ebe-86cef0aff52e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133f0b8590dcc5d6f5b3cec79247fd2628807fc333ebf93ee4a0dfc7158c6e37,PodSandboxId:426189682f0ab05c74dd9cf11c0f1d8733e9a2437332cb8ecb59b47860b3623d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763976974950266906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278f2110deb5080d9606065b2a0cd92b,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49a922d4e38ff2ca9e8dea3e094e6e0d8b61b3be874d54c61974642b7ef5389,PodSandboxId:770ef116ffca8287c1607e5dd60437234fcbbb1f393880968da7d025e7527008,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763976974986364649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4802b899eccf6f0cc8fad7e12380ed6f,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2449e91d2411872d949a2babfcc174c9ab76162dc51637454c829a1ab10e1,PodSandboxId:4fe9506c4263c6dea04d062ed978d0e334fa3f7baeeb5aaa2b5e3c1de14cb1b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763976974964713079,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36160764e7ba81570bfb7cfad9e2fb8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bfa44fd30352b8e1e740792d0b152bd76ebdf3e57ec0715d8513d689a5e10e,PodSandboxId:549716c5392737edb7b5c3f04d60d800f61200520a96e15dbec5a823afe5b18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763976974931077990,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f079a817754e3f909c95df4ffde0c537,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a09a0096-fa06-42b2-99be-ee2e60bbae99 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.887247004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=292ff601-8d0b-4961-9efc-74b7dcf435ca name=/runtime.v1.RuntimeService/Version
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.887375043Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=292ff601-8d0b-4961-9efc-74b7dcf435ca name=/runtime.v1.RuntimeService/Version
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.888735410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be556535-2ea7-4790-86d3-5374af144fc7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.889353291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763976990889325216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be556535-2ea7-4790-86d3-5374af144fc7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.890213565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a94b71c6-8406-4827-b455-ed3c20e85c1d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.890326567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a94b71c6-8406-4827-b455-ed3c20e85c1d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.890513248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bcf206a5b71a67482d9ab57887b64938f4e80b715d996847257b6cef277a3b1f,PodSandboxId:c6418b1d4810e0b12a6417e9cee9f119e28506d27e95849568cfcb39706020fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763976982255456048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6mbhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df56161-50bd-4de0-97ed-3624b1945f49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee7cba8dd9479862553b720517a7e1a0aa594bf8a291cf4ee9e8f74fdd2ef5d,PodSandboxId:3b8e6e00bd622afe7674080ea0819c6c4e4378594d1f5619b547580df5e0c362,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763976978810335995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4556911e-0f91-4db3-b3a7-8ac95ef42d23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d25e79138509f6346749e8251b478fb90668b1050cdbd7a2305aa1b27cd6fc8f,PodSandboxId:5070b9742ba1abe3e141aec3b56dbdc45e920e1d608f6980d82e6a3aeb494686,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763976978828088044,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkbcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
4c7e21-bddf-4288-9ebe-86cef0aff52e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133f0b8590dcc5d6f5b3cec79247fd2628807fc333ebf93ee4a0dfc7158c6e37,PodSandboxId:426189682f0ab05c74dd9cf11c0f1d8733e9a2437332cb8ecb59b47860b3623d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763976974950266906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278f2110deb5080d9606065b2a0cd92b,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49a922d4e38ff2ca9e8dea3e094e6e0d8b61b3be874d54c61974642b7ef5389,PodSandboxId:770ef116ffca8287c1607e5dd60437234fcbbb1f393880968da7d025e7527008,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763976974986364649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4802b899eccf6f0cc8fad7e12380ed6f,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2449e91d2411872d949a2babfcc174c9ab76162dc51637454c829a1ab10e1,PodSandboxId:4fe9506c4263c6dea04d062ed978d0e334fa3f7baeeb5aaa2b5e3c1de14cb1b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763976974964713079,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36160764e7ba81570bfb7cfad9e2fb8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bfa44fd30352b8e1e740792d0b152bd76ebdf3e57ec0715d8513d689a5e10e,PodSandboxId:549716c5392737edb7b5c3f04d60d800f61200520a96e15dbec5a823afe5b18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763976974931077990,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f079a817754e3f909c95df4ffde0c537,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a94b71c6-8406-4827-b455-ed3c20e85c1d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.924889672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=713cf4c3-3ca2-4eb5-88ca-fb27639723b3 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.924976408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=713cf4c3-3ca2-4eb5-88ca-fb27639723b3 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.926214705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce55346d-2d4c-4597-96e0-2ef24a750612 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.926739564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763976990926714103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce55346d-2d4c-4597-96e0-2ef24a750612 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.927803959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06f42c13-9c44-4f1b-a785-6d283b7a67ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.927863598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06f42c13-9c44-4f1b-a785-6d283b7a67ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.928045789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bcf206a5b71a67482d9ab57887b64938f4e80b715d996847257b6cef277a3b1f,PodSandboxId:c6418b1d4810e0b12a6417e9cee9f119e28506d27e95849568cfcb39706020fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763976982255456048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6mbhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df56161-50bd-4de0-97ed-3624b1945f49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee7cba8dd9479862553b720517a7e1a0aa594bf8a291cf4ee9e8f74fdd2ef5d,PodSandboxId:3b8e6e00bd622afe7674080ea0819c6c4e4378594d1f5619b547580df5e0c362,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763976978810335995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4556911e-0f91-4db3-b3a7-8ac95ef42d23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d25e79138509f6346749e8251b478fb90668b1050cdbd7a2305aa1b27cd6fc8f,PodSandboxId:5070b9742ba1abe3e141aec3b56dbdc45e920e1d608f6980d82e6a3aeb494686,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763976978828088044,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkbcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
4c7e21-bddf-4288-9ebe-86cef0aff52e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133f0b8590dcc5d6f5b3cec79247fd2628807fc333ebf93ee4a0dfc7158c6e37,PodSandboxId:426189682f0ab05c74dd9cf11c0f1d8733e9a2437332cb8ecb59b47860b3623d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763976974950266906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278f2110deb5080d9606065b2a0cd92b,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49a922d4e38ff2ca9e8dea3e094e6e0d8b61b3be874d54c61974642b7ef5389,PodSandboxId:770ef116ffca8287c1607e5dd60437234fcbbb1f393880968da7d025e7527008,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763976974986364649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4802b899eccf6f0cc8fad7e12380ed6f,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2449e91d2411872d949a2babfcc174c9ab76162dc51637454c829a1ab10e1,PodSandboxId:4fe9506c4263c6dea04d062ed978d0e334fa3f7baeeb5aaa2b5e3c1de14cb1b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763976974964713079,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36160764e7ba81570bfb7cfad9e2fb8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bfa44fd30352b8e1e740792d0b152bd76ebdf3e57ec0715d8513d689a5e10e,PodSandboxId:549716c5392737edb7b5c3f04d60d800f61200520a96e15dbec5a823afe5b18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763976974931077990,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f079a817754e3f909c95df4ffde0c537,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06f42c13-9c44-4f1b-a785-6d283b7a67ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.957392108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf3352da-262d-46ec-9376-5af6adbe90e8 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.957476679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf3352da-262d-46ec-9376-5af6adbe90e8 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.959370160Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0eba386e-2190-42b5-9ec9-772d73bf6d35 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.960433166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763976990960407513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0eba386e-2190-42b5-9ec9-772d73bf6d35 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.961446502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f777577-2938-4207-bb99-3ec93333c0a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.961588135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f777577-2938-4207-bb99-3ec93333c0a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:36:30 test-preload-680081 crio[833]: time="2025-11-24 09:36:30.961759497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bcf206a5b71a67482d9ab57887b64938f4e80b715d996847257b6cef277a3b1f,PodSandboxId:c6418b1d4810e0b12a6417e9cee9f119e28506d27e95849568cfcb39706020fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763976982255456048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-6mbhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df56161-50bd-4de0-97ed-3624b1945f49,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee7cba8dd9479862553b720517a7e1a0aa594bf8a291cf4ee9e8f74fdd2ef5d,PodSandboxId:3b8e6e00bd622afe7674080ea0819c6c4e4378594d1f5619b547580df5e0c362,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763976978810335995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4556911e-0f91-4db3-b3a7-8ac95ef42d23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d25e79138509f6346749e8251b478fb90668b1050cdbd7a2305aa1b27cd6fc8f,PodSandboxId:5070b9742ba1abe3e141aec3b56dbdc45e920e1d608f6980d82e6a3aeb494686,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763976978828088044,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkbcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
4c7e21-bddf-4288-9ebe-86cef0aff52e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133f0b8590dcc5d6f5b3cec79247fd2628807fc333ebf93ee4a0dfc7158c6e37,PodSandboxId:426189682f0ab05c74dd9cf11c0f1d8733e9a2437332cb8ecb59b47860b3623d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763976974950266906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278f2110deb5080d9606065b2a0cd92b,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49a922d4e38ff2ca9e8dea3e094e6e0d8b61b3be874d54c61974642b7ef5389,PodSandboxId:770ef116ffca8287c1607e5dd60437234fcbbb1f393880968da7d025e7527008,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763976974986364649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4802b899eccf6f0cc8fad7e12380ed6f,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2449e91d2411872d949a2babfcc174c9ab76162dc51637454c829a1ab10e1,PodSandboxId:4fe9506c4263c6dea04d062ed978d0e334fa3f7baeeb5aaa2b5e3c1de14cb1b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763976974964713079,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36160764e7ba81570bfb7cfad9e2fb8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bfa44fd30352b8e1e740792d0b152bd76ebdf3e57ec0715d8513d689a5e10e,PodSandboxId:549716c5392737edb7b5c3f04d60d800f61200520a96e15dbec5a823afe5b18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763976974931077990,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-680081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f079a817754e3f909c95df4ffde0c537,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f777577-2938-4207-bb99-3ec93333c0a0 name=/runtime.v1.RuntimeService/ListContainers
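
	The Version, ImageFsInfo and ListContainers entries above are the standard CRI RPCs that the kubelet and crictl issue against CRI-O's socket; the debug lines are CRI-O echoing each request and response. A rough Go sketch of the same three calls over the unix socket recorded in the node's cri-socket annotation, assuming the k8s.io/cri-api v1 client (an illustration, not the exact client the kubelet uses):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Same socket as the node's kubeadm.alpha.kubernetes.io/cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// RuntimeService/Version — matches the "RuntimeName:cri-o,RuntimeVersion:1.29.1" responses.
		v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(v.RuntimeName, v.RuntimeVersion)

		// ImageService/ImageFsInfo — image filesystem usage, as logged above.
		if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
			for _, f := range fs.ImageFilesystems {
				fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value)
			}
		}

		// RuntimeService/ListContainers with an empty filter returns the full container list.
		cl, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range cl.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}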
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	bcf206a5b71a6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   c6418b1d4810e       coredns-668d6bf9bc-6mbhd                      kube-system
	d25e79138509f       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   12 seconds ago      Running             kube-proxy                1                   5070b9742ba1a       kube-proxy-nkbcj                              kube-system
	6ee7cba8dd947       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   3b8e6e00bd622       storage-provisioner                           kube-system
	f49a922d4e38f       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   16 seconds ago      Running             kube-scheduler            1                   770ef116ffca8       kube-scheduler-test-preload-680081            kube-system
	e7a2449e91d24       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   16 seconds ago      Running             kube-controller-manager   1                   4fe9506c4263c       kube-controller-manager-test-preload-680081   kube-system
	133f0b8590dcc       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   16 seconds ago      Running             etcd                      1                   426189682f0ab       etcd-test-preload-680081                      kube-system
	f7bfa44fd3035       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   16 seconds ago      Running             kube-apiserver            1                   549716c539273       kube-apiserver-test-preload-680081            kube-system
	
	
	==> coredns [bcf206a5b71a67482d9ab57887b64938f4e80b715d996847257b6cef277a3b1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57153 - 47847 "HINFO IN 5116621067396543298.327351638327135937. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026590123s
	
	
	==> describe nodes <==
	Name:               test-preload-680081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-680081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=test-preload-680081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_34_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:34:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-680081
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:36:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:36:19 +0000   Mon, 24 Nov 2025 09:34:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:36:19 +0000   Mon, 24 Nov 2025 09:34:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:36:19 +0000   Mon, 24 Nov 2025 09:34:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:36:19 +0000   Mon, 24 Nov 2025 09:36:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    test-preload-680081
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 39c91d2aaeb84bcc86a50b1dec949781
	  System UUID:                39c91d2a-aeb8-4bcc-86a5-0b1dec949781
	  Boot ID:                    dfbbdb06-b811-4134-98fa-364619b89e6f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-6mbhd                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     98s
	  kube-system                 etcd-test-preload-680081                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         102s
	  kube-system                 kube-apiserver-test-preload-680081             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-test-preload-680081    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-nkbcj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-test-preload-680081             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 95s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   Starting                 103s               kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  102s               kubelet          Node test-preload-680081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s               kubelet          Node test-preload-680081 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s               kubelet          Node test-preload-680081 status is now: NodeHasSufficientPID
	  Normal   NodeReady                102s               kubelet          Node test-preload-680081 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  102s               kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           99s                node-controller  Node test-preload-680081 event: Registered Node test-preload-680081 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node test-preload-680081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node test-preload-680081 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node test-preload-680081 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 14s                kubelet          Node test-preload-680081 has been rebooted, boot id: dfbbdb06-b811-4134-98fa-364619b89e6f
	  Normal   RegisteredNode           11s                node-controller  Node test-preload-680081 event: Registered Node test-preload-680081 in Controller
	
	
	==> dmesg <==
	[Nov24 09:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001513] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Nov24 09:36] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.040348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.102029] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.568947] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.340459] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [133f0b8590dcc5d6f5b3cec79247fd2628807fc333ebf93ee4a0dfc7158c6e37] <==
	{"level":"info","ts":"2025-11-24T09:36:15.356371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2025-11-24T09:36:15.356457Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2025-11-24T09:36:15.356604Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:36:15.356643Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:36:15.357815Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T09:36:15.361315Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T09:36:15.362332Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T09:36:15.362492Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2025-11-24T09:36:15.362518Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2025-11-24T09:36:16.424102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T09:36:16.424206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T09:36:16.424250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 2"}
	{"level":"info","ts":"2025-11-24T09:36:16.424318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T09:36:16.424351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2025-11-24T09:36:16.424371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T09:36:16.424397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2025-11-24T09:36:16.425928Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:test-preload-680081 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T09:36:16.426321Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:36:16.428172Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:36:16.428208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:36:16.426381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T09:36:16.429053Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:36:16.429607Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:36:16.435795Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.97:2379"}
	{"level":"info","ts":"2025-11-24T09:36:16.429792Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:36:31 up 0 min,  0 users,  load average: 0.55, 0.14, 0.05
	Linux test-preload-680081 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f7bfa44fd30352b8e1e740792d0b152bd76ebdf3e57ec0715d8513d689a5e10e] <==
	I1124 09:36:17.803010       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:36:17.824987       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:36:17.835062       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1124 09:36:17.835104       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 09:36:17.835191       1 shared_informer.go:320] Caches are synced for configmaps
	I1124 09:36:17.835215       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1124 09:36:17.835815       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1124 09:36:17.835910       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:36:17.836343       1 aggregator.go:171] initial CRD sync complete...
	I1124 09:36:17.836367       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:36:17.836376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:36:17.836382       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:36:17.841519       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1124 09:36:17.841551       1 policy_source.go:240] refreshing policies
	E1124 09:36:17.845178       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:36:17.846813       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:36:18.353859       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1124 09:36:18.708032       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:36:19.597142       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1124 09:36:19.643249       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1124 09:36:19.674179       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:36:19.683106       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:36:21.012955       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1124 09:36:21.255135       1 controller.go:615] quota admission added evaluator for: endpoints
	I1124 09:36:21.412485       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e7a2449e91d2411872d949a2babfcc174c9ab76162dc51637454c829a1ab10e1] <==
	I1124 09:36:21.003772       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 09:36:21.003857       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 09:36:21.004082       1 shared_informer.go:320] Caches are synced for ephemeral
	I1124 09:36:21.008427       1 shared_informer.go:320] Caches are synced for resource quota
	I1124 09:36:21.008562       1 shared_informer.go:320] Caches are synced for resource quota
	I1124 09:36:21.008688       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1124 09:36:21.009708       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1124 09:36:21.014924       1 shared_informer.go:320] Caches are synced for node
	I1124 09:36:21.015381       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 09:36:21.015481       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 09:36:21.014927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1124 09:36:21.015537       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1124 09:36:21.015755       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1124 09:36:21.015860       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-680081"
	I1124 09:36:21.018575       1 shared_informer.go:320] Caches are synced for daemon sets
	I1124 09:36:21.022088       1 shared_informer.go:320] Caches are synced for deployment
	I1124 09:36:21.022891       1 shared_informer.go:320] Caches are synced for PVC protection
	I1124 09:36:21.025036       1 shared_informer.go:320] Caches are synced for persistent volume
	I1124 09:36:21.026082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="70.615032ms"
	I1124 09:36:21.026162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="39.734µs"
	I1124 09:36:21.030254       1 shared_informer.go:320] Caches are synced for garbage collector
	I1124 09:36:21.034559       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1124 09:36:22.461377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.12153ms"
	I1124 09:36:27.389603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.699044ms"
	I1124 09:36:27.390471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="293.382µs"
	
	
	==> kube-proxy [d25e79138509f6346749e8251b478fb90668b1050cdbd7a2305aa1b27cd6fc8f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1124 09:36:19.084993       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1124 09:36:19.096641       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	E1124 09:36:19.096786       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:36:19.131591       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1124 09:36:19.131667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 09:36:19.131699       1 server_linux.go:170] "Using iptables Proxier"
	I1124 09:36:19.134239       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:36:19.134748       1 server.go:497] "Version info" version="v1.32.0"
	I1124 09:36:19.134798       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:36:19.136426       1 config.go:199] "Starting service config controller"
	I1124 09:36:19.136462       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1124 09:36:19.136488       1 config.go:105] "Starting endpoint slice config controller"
	I1124 09:36:19.136506       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1124 09:36:19.139215       1 config.go:329] "Starting node config controller"
	I1124 09:36:19.139354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1124 09:36:19.236633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1124 09:36:19.236680       1 shared_informer.go:320] Caches are synced for service config
	I1124 09:36:19.239733       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f49a922d4e38ff2ca9e8dea3e094e6e0d8b61b3be874d54c61974642b7ef5389] <==
	I1124 09:36:16.458114       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:36:17.737550       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:36:17.737695       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:36:17.737724       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:36:17.737802       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:36:17.815089       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1124 09:36:17.815132       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:36:17.826983       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1124 09:36:17.828353       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:36:17.829557       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 09:36:17.828375       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:36:17.930665       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: E1124 09:36:17.900121    1167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-680081\" already exists" pod="kube-system/kube-controller-manager-test-preload-680081"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: I1124 09:36:17.900171    1167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-680081"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: E1124 09:36:17.917198    1167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-680081\" already exists" pod="kube-system/kube-scheduler-test-preload-680081"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: I1124 09:36:17.936357    1167 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-680081"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: I1124 09:36:17.936473    1167 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-680081"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: I1124 09:36:17.936507    1167 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: I1124 09:36:17.937529    1167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:36:17 test-preload-680081 kubelet[1167]: I1124 09:36:17.938548    1167 setters.go:602] "Node became not ready" node="test-preload-680081" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T09:36:17Z","lastTransitionTime":"2025-11-24T09:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: I1124 09:36:18.320119    1167 apiserver.go:52] "Watching apiserver"
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: E1124 09:36:18.325518    1167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-6mbhd" podUID="9df56161-50bd-4de0-97ed-3624b1945f49"
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: I1124 09:36:18.341455    1167 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: I1124 09:36:18.348093    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d54c7e21-bddf-4288-9ebe-86cef0aff52e-xtables-lock\") pod \"kube-proxy-nkbcj\" (UID: \"d54c7e21-bddf-4288-9ebe-86cef0aff52e\") " pod="kube-system/kube-proxy-nkbcj"
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: I1124 09:36:18.348134    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4556911e-0f91-4db3-b3a7-8ac95ef42d23-tmp\") pod \"storage-provisioner\" (UID: \"4556911e-0f91-4db3-b3a7-8ac95ef42d23\") " pod="kube-system/storage-provisioner"
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: I1124 09:36:18.348153    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d54c7e21-bddf-4288-9ebe-86cef0aff52e-lib-modules\") pod \"kube-proxy-nkbcj\" (UID: \"d54c7e21-bddf-4288-9ebe-86cef0aff52e\") " pod="kube-system/kube-proxy-nkbcj"
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: E1124 09:36:18.348967    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: E1124 09:36:18.349547    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9df56161-50bd-4de0-97ed-3624b1945f49-config-volume podName:9df56161-50bd-4de0-97ed-3624b1945f49 nodeName:}" failed. No retries permitted until 2025-11-24 09:36:18.849519425 +0000 UTC m=+5.619510428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9df56161-50bd-4de0-97ed-3624b1945f49-config-volume") pod "coredns-668d6bf9bc-6mbhd" (UID: "9df56161-50bd-4de0-97ed-3624b1945f49") : object "kube-system"/"coredns" not registered
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: E1124 09:36:18.852119    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 09:36:18 test-preload-680081 kubelet[1167]: E1124 09:36:18.852188    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9df56161-50bd-4de0-97ed-3624b1945f49-config-volume podName:9df56161-50bd-4de0-97ed-3624b1945f49 nodeName:}" failed. No retries permitted until 2025-11-24 09:36:19.852174742 +0000 UTC m=+6.622165733 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9df56161-50bd-4de0-97ed-3624b1945f49-config-volume") pod "coredns-668d6bf9bc-6mbhd" (UID: "9df56161-50bd-4de0-97ed-3624b1945f49") : object "kube-system"/"coredns" not registered
	Nov 24 09:36:19 test-preload-680081 kubelet[1167]: E1124 09:36:19.369472    1167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-6mbhd" podUID="9df56161-50bd-4de0-97ed-3624b1945f49"
	Nov 24 09:36:19 test-preload-680081 kubelet[1167]: I1124 09:36:19.821849    1167 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 24 09:36:19 test-preload-680081 kubelet[1167]: E1124 09:36:19.862225    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 09:36:19 test-preload-680081 kubelet[1167]: E1124 09:36:19.862355    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9df56161-50bd-4de0-97ed-3624b1945f49-config-volume podName:9df56161-50bd-4de0-97ed-3624b1945f49 nodeName:}" failed. No retries permitted until 2025-11-24 09:36:21.862270977 +0000 UTC m=+8.632261964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9df56161-50bd-4de0-97ed-3624b1945f49-config-volume") pod "coredns-668d6bf9bc-6mbhd" (UID: "9df56161-50bd-4de0-97ed-3624b1945f49") : object "kube-system"/"coredns" not registered
	Nov 24 09:36:23 test-preload-680081 kubelet[1167]: E1124 09:36:23.400062    1167 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763976983398772684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 09:36:23 test-preload-680081 kubelet[1167]: E1124 09:36:23.400106    1167 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763976983398772684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 09:36:27 test-preload-680081 kubelet[1167]: I1124 09:36:27.359118    1167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [6ee7cba8dd9479862553b720517a7e1a0aa594bf8a291cf4ee9e8f74fdd2ef5d] <==
	I1124 09:36:19.018032       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-680081 -n test-preload-680081
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-680081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-680081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-680081
--- FAIL: TestPreload (153.08s)
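The post-mortem above closes with a field-selector query for any pods not in phase Running (helpers_test.go:269). A minimal client-go sketch of that same query, assuming a locally reachable kubeconfig (the path below is a placeholder, not the CI runner's), looks roughly like this:

// Hypothetical sketch, not part of the minikube test suite: the same
// "pods not Running" query that the post-mortem helper issues via kubectl,
// expressed with client-go. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// An empty namespace ("") lists across all namespaces, mirroring `-A`;
	// the field selector matches the helper's status.phase!=Running filter.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Run against the cluster at the time of the failure, this would list any pod whose phase was Pending, Succeeded, Failed, or Unknown, together with its namespace.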

x
+
TestPause/serial/SecondStartNoReconfiguration (415.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-377882 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-377882 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m51.890632908s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-377882] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-377882" primary control-plane node in "pause-377882" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-377882" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1124 09:41:04.647788   45116 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:41:04.648048   45116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:41:04.648056   45116 out.go:374] Setting ErrFile to fd 2...
	I1124 09:41:04.648060   45116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:41:04.648290   45116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:41:04.648722   45116 out.go:368] Setting JSON to false
	I1124 09:41:04.649586   45116 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5001,"bootTime":1763972264,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:41:04.649675   45116 start.go:143] virtualization: kvm guest
	I1124 09:41:04.651952   45116 out.go:179] * [pause-377882] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:41:04.653304   45116 notify.go:221] Checking for updates...
	I1124 09:41:04.653350   45116 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:41:04.654694   45116 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:41:04.656022   45116 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:41:04.657264   45116 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 09:41:04.658403   45116 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:41:04.659600   45116 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:41:04.661238   45116 config.go:182] Loaded profile config "pause-377882": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:41:04.661757   45116 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:41:04.694629   45116 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 09:41:04.695801   45116 start.go:309] selected driver: kvm2
	I1124 09:41:04.695818   45116 start.go:927] validating driver "kvm2" against &{Name:pause-377882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-377882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:41:04.695937   45116 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:41:04.696826   45116 cni.go:84] Creating CNI manager for ""
	I1124 09:41:04.696888   45116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:41:04.696936   45116 start.go:353] cluster config:
	{Name:pause-377882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-377882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:41:04.697044   45116 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:41:04.698569   45116 out.go:179] * Starting "pause-377882" primary control-plane node in "pause-377882" cluster
	I1124 09:41:04.699609   45116 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:41:04.699640   45116 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 09:41:04.699647   45116 cache.go:65] Caching tarball of preloaded images
	I1124 09:41:04.699738   45116 preload.go:238] Found /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 09:41:04.699749   45116 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:41:04.699842   45116 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/config.json ...
	I1124 09:41:04.700045   45116 start.go:360] acquireMachinesLock for pause-377882: {Name:mk7b5988e566cc8ac324d849b09ff116b4f24553 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 09:41:27.353304   45116 start.go:364] duration metric: took 22.653186444s to acquireMachinesLock for "pause-377882"
	I1124 09:41:27.353373   45116 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:41:27.353383   45116 fix.go:54] fixHost starting: 
	I1124 09:41:27.356320   45116 fix.go:112] recreateIfNeeded on pause-377882: state=Running err=<nil>
	W1124 09:41:27.356350   45116 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:41:27.357764   45116 out.go:252] * Updating the running kvm2 "pause-377882" VM ...
	I1124 09:41:27.357803   45116 machine.go:94] provisionDockerMachine start ...
	I1124 09:41:27.363104   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.363742   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.363779   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.363971   45116 main.go:143] libmachine: Using SSH client type: native
	I1124 09:41:27.364239   45116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1124 09:41:27.364253   45116 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:41:27.466950   45116 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-377882
	
	I1124 09:41:27.467044   45116 buildroot.go:166] provisioning hostname "pause-377882"
	I1124 09:41:27.470493   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.470902   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.470928   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.471146   45116 main.go:143] libmachine: Using SSH client type: native
	I1124 09:41:27.471371   45116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1124 09:41:27.471384   45116 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-377882 && echo "pause-377882" | sudo tee /etc/hostname
	I1124 09:41:27.597287   45116 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-377882
	
	I1124 09:41:27.600275   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.600738   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.600770   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.600968   45116 main.go:143] libmachine: Using SSH client type: native
	I1124 09:41:27.601281   45116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1124 09:41:27.601306   45116 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-377882' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-377882/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-377882' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:41:27.703113   45116 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:41:27.703182   45116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 09:41:27.703218   45116 buildroot.go:174] setting up certificates
	I1124 09:41:27.703239   45116 provision.go:84] configureAuth start
	I1124 09:41:27.706373   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.706830   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.706862   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.709174   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.709562   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.709585   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.709708   45116 provision.go:143] copyHostCerts
	I1124 09:41:27.709763   45116 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem, removing ...
	I1124 09:41:27.709787   45116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem
	I1124 09:41:27.709845   45116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 09:41:27.709938   45116 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem, removing ...
	I1124 09:41:27.709946   45116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem
	I1124 09:41:27.709967   45116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 09:41:27.710016   45116 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem, removing ...
	I1124 09:41:27.710023   45116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem
	I1124 09:41:27.710040   45116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 09:41:27.710084   45116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.pause-377882 san=[127.0.0.1 192.168.39.144 localhost minikube pause-377882]
	I1124 09:41:27.792343   45116 provision.go:177] copyRemoteCerts
	I1124 09:41:27.792406   45116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:41:27.795527   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.796044   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.796074   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.796268   45116 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/pause-377882/id_rsa Username:docker}
	I1124 09:41:27.884300   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:41:27.920966   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:41:27.954224   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 09:41:27.988909   45116 provision.go:87] duration metric: took 285.65108ms to configureAuth
	I1124 09:41:27.988942   45116 buildroot.go:189] setting minikube options for container-runtime
	I1124 09:41:27.989199   45116 config.go:182] Loaded profile config "pause-377882": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:41:27.992607   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.993066   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:27.993090   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:27.993305   45116 main.go:143] libmachine: Using SSH client type: native
	I1124 09:41:27.993588   45116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1124 09:41:27.993613   45116 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:41:33.594922   45116 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:41:33.594949   45116 machine.go:97] duration metric: took 6.237138134s to provisionDockerMachine
	I1124 09:41:33.594961   45116 start.go:293] postStartSetup for "pause-377882" (driver="kvm2")
	I1124 09:41:33.594972   45116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:41:33.595049   45116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:41:33.598022   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.598504   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:33.598541   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.598710   45116 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/pause-377882/id_rsa Username:docker}
	I1124 09:41:33.696594   45116 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:41:33.702670   45116 info.go:137] Remote host: Buildroot 2025.02
	I1124 09:41:33.702706   45116 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 09:41:33.702782   45116 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 09:41:33.702886   45116 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem -> 96292.pem in /etc/ssl/certs
	I1124 09:41:33.703037   45116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:41:33.718894   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:41:33.754971   45116 start.go:296] duration metric: took 159.996026ms for postStartSetup
	I1124 09:41:33.755043   45116 fix.go:56] duration metric: took 6.401640377s for fixHost
	I1124 09:41:33.758423   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.758829   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:33.758862   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.759111   45116 main.go:143] libmachine: Using SSH client type: native
	I1124 09:41:33.759387   45116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1124 09:41:33.759399   45116 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 09:41:33.872598   45116 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763977293.869831391
	
	I1124 09:41:33.872628   45116 fix.go:216] guest clock: 1763977293.869831391
	I1124 09:41:33.872639   45116 fix.go:229] Guest: 2025-11-24 09:41:33.869831391 +0000 UTC Remote: 2025-11-24 09:41:33.755050029 +0000 UTC m=+29.160662435 (delta=114.781362ms)
	I1124 09:41:33.872663   45116 fix.go:200] guest clock delta is within tolerance: 114.781362ms
	I1124 09:41:33.872670   45116 start.go:83] releasing machines lock for "pause-377882", held for 6.519318312s
	I1124 09:41:33.876811   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.877411   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:33.877445   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.878376   45116 ssh_runner.go:195] Run: cat /version.json
	I1124 09:41:33.878451   45116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:41:33.884998   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.885062   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.885703   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:33.885741   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.885898   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:41:33.885934   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:41:33.886329   45116 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/pause-377882/id_rsa Username:docker}
	I1124 09:41:33.886653   45116 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/pause-377882/id_rsa Username:docker}
	I1124 09:41:34.012818   45116 ssh_runner.go:195] Run: systemctl --version
	I1124 09:41:34.020754   45116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:41:34.251358   45116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:41:34.264723   45116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:41:34.264868   45116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:41:34.293656   45116 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:41:34.293691   45116 start.go:496] detecting cgroup driver to use...
	I1124 09:41:34.293773   45116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:41:34.343307   45116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:41:34.385489   45116 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:41:34.385590   45116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:41:34.420153   45116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:41:34.463367   45116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:41:34.776762   45116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:41:35.025707   45116 docker.go:234] disabling docker service ...
	I1124 09:41:35.025779   45116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:41:35.058350   45116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:41:35.074807   45116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:41:35.298994   45116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:41:35.624411   45116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:41:35.674876   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:41:35.731808   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:41:36.036444   45116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:41:36.036578   45116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.060854   45116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:41:36.060937   45116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.095305   45116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.129433   45116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.157272   45116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:41:36.190629   45116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.225760   45116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.296269   45116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:41:36.358507   45116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:41:36.408230   45116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:41:36.468326   45116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:41:36.907448   45116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:43:07.354443   45116 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.446932433s)
	I1124 09:43:07.354485   45116 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:43:07.354552   45116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:43:07.360282   45116 start.go:564] Will wait 60s for crictl version
	I1124 09:43:07.360371   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:43:07.365421   45116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 09:43:07.407669   45116 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 09:43:07.407785   45116 ssh_runner.go:195] Run: crio --version
	I1124 09:43:07.436672   45116 ssh_runner.go:195] Run: crio --version
	I1124 09:43:07.474988   45116 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1124 09:43:07.479320   45116 main.go:143] libmachine: domain pause-377882 has defined MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:43:07.479759   45116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:97:d4:51", ip: ""} in network mk-pause-377882: {Iface:virbr1 ExpiryTime:2025-11-24 10:40:17 +0000 UTC Type:0 Mac:52:54:00:97:d4:51 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:pause-377882 Clientid:01:52:54:00:97:d4:51}
	I1124 09:43:07.479783   45116 main.go:143] libmachine: domain pause-377882 has defined IP address 192.168.39.144 and MAC address 52:54:00:97:d4:51 in network mk-pause-377882
	I1124 09:43:07.479962   45116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 09:43:07.485746   45116 kubeadm.go:884] updating cluster {Name:pause-377882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-377882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:43:07.485957   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:43:07.768956   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:43:08.047130   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:43:08.324259   45116 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:43:08.324421   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:43:08.609192   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:43:08.921850   45116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:43:09.231568   45116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:43:09.332536   45116 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:43:09.332566   45116 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:43:09.332626   45116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:43:09.396338   45116 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:43:09.396368   45116 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:43:09.396379   45116 kubeadm.go:935] updating node { 192.168.39.144 8443 v1.34.2 crio true true} ...
	I1124 09:43:09.396497   45116 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-377882 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-377882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:43:09.396583   45116 ssh_runner.go:195] Run: crio config
	I1124 09:43:09.465390   45116 cni.go:84] Creating CNI manager for ""
	I1124 09:43:09.465422   45116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:43:09.465441   45116 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:43:09.465469   45116 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-377882 NodeName:pause-377882 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:43:09.465629   45116 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-377882"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.144"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:43:09.465699   45116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:43:09.483945   45116 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:43:09.484021   45116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:43:09.510229   45116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1124 09:43:09.578055   45116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:43:09.625668   45116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 09:43:09.667954   45116 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I1124 09:43:09.674950   45116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:43:09.917579   45116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:43:09.938913   45116 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882 for IP: 192.168.39.144
	I1124 09:43:09.938941   45116 certs.go:195] generating shared ca certs ...
	I1124 09:43:09.938960   45116 certs.go:227] acquiring lock for ca certs: {Name:mkc847d4fb6fb61872e24a1bb00356ff9ef1a409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:43:09.939143   45116 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key
	I1124 09:43:09.939203   45116 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key
	I1124 09:43:09.939214   45116 certs.go:257] generating profile certs ...
	I1124 09:43:09.939292   45116 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/client.key
	I1124 09:43:09.939353   45116 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/apiserver.key.d3e18ad4
	I1124 09:43:09.939437   45116 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/proxy-client.key
	I1124 09:43:09.939557   45116 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem (1338 bytes)
	W1124 09:43:09.939605   45116 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629_empty.pem, impossibly tiny 0 bytes
	I1124 09:43:09.939614   45116 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:43:09.939651   45116 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:43:09.939691   45116 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:43:09.939731   45116 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem (1675 bytes)
	I1124 09:43:09.939816   45116 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:43:09.940665   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:43:09.996287   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:43:10.052803   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:43:10.108846   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:43:10.162813   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 09:43:10.215828   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:43:10.268091   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:43:10.323087   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:43:10.376110   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:43:10.431855   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem --> /usr/share/ca-certificates/9629.pem (1338 bytes)
	I1124 09:43:10.466274   45116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /usr/share/ca-certificates/96292.pem (1708 bytes)
	I1124 09:43:10.509620   45116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:43:10.550045   45116 ssh_runner.go:195] Run: openssl version
	I1124 09:43:10.564550   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9629.pem && ln -fs /usr/share/ca-certificates/9629.pem /etc/ssl/certs/9629.pem"
	I1124 09:43:10.586297   45116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9629.pem
	I1124 09:43:10.593150   45116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:42 /usr/share/ca-certificates/9629.pem
	I1124 09:43:10.593238   45116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9629.pem
	I1124 09:43:10.603644   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9629.pem /etc/ssl/certs/51391683.0"
	I1124 09:43:10.623370   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96292.pem && ln -fs /usr/share/ca-certificates/96292.pem /etc/ssl/certs/96292.pem"
	I1124 09:43:10.645887   45116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96292.pem
	I1124 09:43:10.654319   45116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:42 /usr/share/ca-certificates/96292.pem
	I1124 09:43:10.654402   45116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96292.pem
	I1124 09:43:10.662925   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:43:10.682751   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:43:10.703823   45116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:43:10.710357   45116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:43:10.710426   45116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:43:10.720916   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:43:10.738868   45116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:43:10.746443   45116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:43:10.758846   45116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:43:10.778487   45116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:43:10.788828   45116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:43:10.800922   45116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:43:10.810784   45116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:43:10.822065   45116 kubeadm.go:401] StartCluster: {Name:pause-377882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-377882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:43:10.822237   45116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:43:10.822361   45116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:43:10.897296   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:43:10.897324   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:43:10.897330   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:43:10.897336   45116 cri.go:89] found id: "f9370a426ab81f9e9b2ab166d2e6167e1a3e4ab7c4a3af105022c772712be789"
	I1124 09:43:10.897341   45116 cri.go:89] found id: "2b845feab3a4e8b429e32bf8b0e7f1929ec879767981a3529c4550851d2bd5fd"
	I1124 09:43:10.897347   45116 cri.go:89] found id: "763f89d49ea3ce527f87bb614a508fcfc777e63d82f76df714e4347e1295e9c9"
	I1124 09:43:10.897352   45116 cri.go:89] found id: "2204115294a6310c27a2e93a64eeb42baada3f84dc9beb6d235a358dbd2bba5b"
	I1124 09:43:10.897357   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:43:10.897361   45116 cri.go:89] found id: "407fe6e69fa82bf39ba93d196f2f29f7cc67487df28162a924efe42c3a1f38f1"
	I1124 09:43:10.897371   45116 cri.go:89] found id: "62c8368e9d86e09c9d1a1ca137623b3468e83f8349be9dac6be508e9813bc3be"
	I1124 09:43:10.897380   45116 cri.go:89] found id: "585e1b4427fe231251eae786576d10151efdb670887075e981f72c75ef2b47c5"
	I1124 09:43:10.897385   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:43:10.897392   45116 cri.go:89] found id: "83f84ff16e7f72ce64205aff35b910bd9d13068835d2817d5ce0dc895e0f9226"
	I1124 09:43:10.897398   45116 cri.go:89] found id: "477b5b7d5575bdc4b47dd6ea9633b168a9f7f830afdbce7b91c2db93d4998757"
	I1124 09:43:10.897407   45116 cri.go:89] found id: "1efdf858a85891cb03247fc882d16809e27a1115e683e2f33a7019b05f42f5fc"
	I1124 09:43:10.897414   45116 cri.go:89] found id: ""
	I1124 09:43:10.897470   45116 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-377882 -n pause-377882
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-377882 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-377882 logs -n 25: (1.315098527s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-options-322176 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:42 UTC │ 24 Nov 25 09:43 UTC │
	│ ssh     │ -p NoKubernetes-544416 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │                     │
	│ stop    │ -p NoKubernetes-544416                                                                                                                                                                                                                      │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ start   │ -p NoKubernetes-544416 --driver=kvm2  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ ssh     │ -p NoKubernetes-544416 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │                     │
	│ delete  │ -p NoKubernetes-544416                                                                                                                                                                                                                      │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ start   │ -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:45 UTC │
	│ ssh     │ cert-options-322176 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ ssh     │ -p cert-options-322176 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ delete  │ -p cert-options-322176                                                                                                                                                                                                                      │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:45 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-960867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:45 UTC │
	│ stop    │ -p old-k8s-version-960867 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-778378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:45 UTC │
	│ stop    │ -p no-preload-778378 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:47 UTC │
	│ start   │ -p cert-expiration-986811 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-986811 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-960867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:46 UTC │
	│ start   │ -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:47 UTC │
	│ delete  │ -p cert-expiration-986811                                                                                                                                                                                                                   │ cert-expiration-986811 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:46 UTC │
	│ start   │ -p embed-certs-626350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-626350     │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-778378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ start   │ -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │                     │
	│ image   │ old-k8s-version-960867 image list --format=json                                                                                                                                                                                             │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ pause   │ -p old-k8s-version-960867 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ unpause │ -p old-k8s-version-960867 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:47:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:47:12.259632   49468 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:47:12.259769   49468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:47:12.259780   49468 out.go:374] Setting ErrFile to fd 2...
	I1124 09:47:12.259786   49468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:47:12.260126   49468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:47:12.260737   49468 out.go:368] Setting JSON to false
	I1124 09:47:12.261966   49468 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5368,"bootTime":1763972264,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:47:12.262054   49468 start.go:143] virtualization: kvm guest
	I1124 09:47:12.264154   49468 out.go:179] * [no-preload-778378] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:47:12.265498   49468 notify.go:221] Checking for updates...
	I1124 09:47:12.265516   49468 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:47:12.266992   49468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:47:12.268427   49468 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:12.269748   49468 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 09:47:12.270974   49468 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:47:12.272264   49468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:47:12.273852   49468 config.go:182] Loaded profile config "no-preload-778378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:12.274329   49468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:47:12.318985   49468 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 09:47:12.320306   49468 start.go:309] selected driver: kvm2
	I1124 09:47:12.320328   49468 start.go:927] validating driver "kvm2" against &{Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:12.320475   49468 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:47:12.321966   49468 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:12.322013   49468 cni.go:84] Creating CNI manager for ""
	I1124 09:47:12.322081   49468 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:12.322130   49468 start.go:353] cluster config:
	{Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:12.322287   49468 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:12.324718   49468 out.go:179] * Starting "no-preload-778378" primary control-plane node in "no-preload-778378" cluster
	I1124 09:47:09.795644   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:09.796396   49230 main.go:143] libmachine: no network interface addresses found for domain embed-certs-626350 (source=lease)
	I1124 09:47:09.796416   49230 main.go:143] libmachine: trying to list again with source=arp
	I1124 09:47:09.796822   49230 main.go:143] libmachine: unable to find current IP address of domain embed-certs-626350 in network mk-embed-certs-626350 (interfaces detected: [])
	I1124 09:47:09.796856   49230 retry.go:31] will retry after 1.912431309s: waiting for domain to come up
	I1124 09:47:11.711443   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:11.712106   49230 main.go:143] libmachine: no network interface addresses found for domain embed-certs-626350 (source=lease)
	I1124 09:47:11.712122   49230 main.go:143] libmachine: trying to list again with source=arp
	I1124 09:47:11.712511   49230 main.go:143] libmachine: unable to find current IP address of domain embed-certs-626350 in network mk-embed-certs-626350 (interfaces detected: [])
	I1124 09:47:11.712547   49230 retry.go:31] will retry after 3.15029127s: waiting for domain to come up
	I1124 09:47:09.691398   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:09.691423   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:09.691429   45116 cri.go:89] found id: ""
	I1124 09:47:09.691437   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:09.691510   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:09.698387   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:09.704968   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:09.705033   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:09.739216   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:09.739244   45116 cri.go:89] found id: ""
	I1124 09:47:09.739255   45116 logs.go:282] 1 containers: [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:09.739327   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:09.743747   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:09.743824   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:09.782911   45116 cri.go:89] found id: ""
	I1124 09:47:09.782938   45116 logs.go:282] 0 containers: []
	W1124 09:47:09.782947   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:09.782956   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:09.782967   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:09.828128   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:09.828183   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:10.161168   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:10.161215   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:10.207459   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:10.207509   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:10.347266   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:10.347322   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:10.370233   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:10.370274   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:10.451901   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:10.451922   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:10.451936   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:10.488059   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:10.488098   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:10.561847   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:10.561882   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:10.645070   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:10.645127   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:10.736900   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:10.736960   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:10.801138   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:10.801193   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:10.859639   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:10.859679   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:10.910532   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:10.910573   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:13.464303   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:13.490945   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:47:13.491026   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:47:13.525218   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:13.525244   45116 cri.go:89] found id: ""
	I1124 09:47:13.525254   45116 logs.go:282] 1 containers: [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a]
	I1124 09:47:13.525316   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.530122   45116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:47:13.530220   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:47:13.566985   45116 cri.go:89] found id: "644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:13.567013   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:13.567019   45116 cri.go:89] found id: ""
	I1124 09:47:13.567028   45116 logs.go:282] 2 containers: [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2]
	I1124 09:47:13.567091   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.575704   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.580061   45116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:47:13.580141   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:47:13.612702   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:13.612730   45116 cri.go:89] found id: ""
	I1124 09:47:13.612749   45116 logs.go:282] 1 containers: [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154]
	I1124 09:47:13.612813   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.617252   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:47:13.617323   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:47:13.654192   45116 cri.go:89] found id: "af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:13.654219   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:13.654226   45116 cri.go:89] found id: ""
	I1124 09:47:13.654235   45116 logs.go:282] 2 containers: [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560]
	I1124 09:47:13.654298   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.660068   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.664712   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:47:13.664789   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:47:13.714327   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:13.714354   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:13.714367   45116 cri.go:89] found id: ""
	I1124 09:47:13.714376   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:13.714436   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.721423   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.727043   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:13.727129   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:13.773234   45116 cri.go:89] found id: "20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:13.773265   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:13.773271   45116 cri.go:89] found id: ""
	I1124 09:47:13.773280   45116 logs.go:282] 2 containers: [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:13.773356   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.779580   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.784971   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:13.785042   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:13.819859   45116 cri.go:89] found id: ""
	I1124 09:47:13.819895   45116 logs.go:282] 0 containers: []
	W1124 09:47:13.819909   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:13.819928   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:13.819949   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:13.864915   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:13.864951   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:13.955155   45116 logs.go:123] Gathering logs for kube-controller-manager [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602] ...
	I1124 09:47:13.955199   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:13.999353   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:13.999386   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:14.112379   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:14.112416   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:14.130303   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:14.130332   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:14.203124   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:14.203172   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:14.252841   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:14.252885   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:14.289469   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:14.289528   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:14.326709   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:14.326749   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:14.378123   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:14.378181   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:14.427299   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:14.427330   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:14.480509   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:14.480558   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:14.524631   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:14.524662   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
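	The collection pass above resolves each component's container IDs with crictl and then tails them one by one, falling back to journald for kubelet and CRI-O. As a rough sketch (assuming shell access to the node, e.g. via minikube ssh), the same logs can be pulled by hand:
	    # list kube-apiserver containers known to CRI-O (IDs as in the log above)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # tail the last 400 lines of one container by ID
	    sudo crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a
	    # kubelet and CRI-O themselves log to journald
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400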
	I1124 09:47:09.896468   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:10.395925   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:10.895591   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:10.941745   49070 api_server.go:72] duration metric: took 3.046449146s to wait for apiserver process to appear ...
	I1124 09:47:10.941784   49070 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:10.941805   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:13.591244   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:47:13.591278   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:47:13.591296   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:13.673996   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:47:13.674034   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:47:13.942381   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:13.960813   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:47:13.960850   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1124 09:47:14.442327   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:14.449600   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:47:14.449630   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1124 09:47:14.942575   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:14.947180   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 200:
	ok
	I1124 09:47:14.954012   49070 api_server.go:141] control plane version: v1.28.0
	I1124 09:47:14.954039   49070 api_server.go:131] duration metric: took 4.012247353s to wait for apiserver health ...
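	The healthz polling above first gets 403 (anonymous access to /healthz is only granted once the RBAC bootstrap roles exist), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally settles at 200. As a sketch, the same endpoint can be probed by hand; the verbose form prints the per-hook breakdown seen above (-k skips TLS verification and is only appropriate against a throwaway test VM):
	    curl -k "https://192.168.83.182:8443/healthz?verbose"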
	I1124 09:47:14.954052   49070 cni.go:84] Creating CNI manager for ""
	I1124 09:47:14.954060   49070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:14.955770   49070 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 09:47:14.956971   49070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:47:14.970168   49070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
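	The bridge CNI configuration is pushed as a 496-byte /etc/cni/net.d/1-k8s.conflist; as a sketch (profile name taken from this run), it can be inspected in place with:
	    minikube -p old-k8s-version-960867 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"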
	I1124 09:47:14.994474   49070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:15.002290   49070 system_pods.go:59] 8 kube-system pods found
	I1124 09:47:15.002319   49070 system_pods.go:61] "coredns-5dd5756b68-qjfrd" [4fd2b02c-5aae-488b-ab0c-c607053b2c61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:15.002327   49070 system_pods.go:61] "etcd-old-k8s-version-960867" [cd6416ef-d54b-45e0-b6a4-b42bcc4e02c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:15.002335   49070 system_pods.go:61] "kube-apiserver-old-k8s-version-960867" [156bcf7a-4753-4df7-b930-852c4e0b254d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:47:15.002343   49070 system_pods.go:61] "kube-controller-manager-old-k8s-version-960867" [77928b09-b20f-4328-8ef0-1545a4fe215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:15.002348   49070 system_pods.go:61] "kube-proxy-lmg4n" [d8bf94d7-0452-410a-9471-be83743449f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:15.002354   49070 system_pods.go:61] "kube-scheduler-old-k8s-version-960867" [f0eec69c-765c-4c84-b554-8236cc26249c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:15.002366   49070 system_pods.go:61] "metrics-server-57f55c9bc5-lbrng" [4b2cfd75-974a-4544-b013-3b8daa376685] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:47:15.002376   49070 system_pods.go:61] "storage-provisioner" [71f29f3d-5b04-4cb9-aab8-233ad3e7fdab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:15.002382   49070 system_pods.go:74] duration metric: took 7.889088ms to wait for pod list to return data ...
	I1124 09:47:15.002391   49070 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:15.009889   49070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:15.009914   49070 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:15.009927   49070 node_conditions.go:105] duration metric: took 7.532244ms to run NodePressure ...
	I1124 09:47:15.009971   49070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:15.235683   49070 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 09:47:15.239554   49070 kubeadm.go:744] kubelet initialised
	I1124 09:47:15.239578   49070 kubeadm.go:745] duration metric: took 3.874474ms waiting for restarted kubelet to initialise ...
	I1124 09:47:15.239592   49070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:47:15.257209   49070 ops.go:34] apiserver oom_adj: -16
	I1124 09:47:15.257231   49070 kubeadm.go:602] duration metric: took 8.669905687s to restartPrimaryControlPlane
	I1124 09:47:15.257240   49070 kubeadm.go:403] duration metric: took 8.723012978s to StartCluster
	I1124 09:47:15.257255   49070 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:15.257317   49070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:15.258046   49070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:15.258267   49070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.182 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:15.258334   49070 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:15.258431   49070 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258448   49070 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-960867"
	W1124 09:47:15.258456   49070 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:47:15.258482   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	I1124 09:47:15.258461   49070 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258509   49070 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-960867"
	I1124 09:47:15.258505   49070 addons.go:70] Setting dashboard=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258524   49070 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258576   49070 addons.go:239] Setting addon dashboard=true in "old-k8s-version-960867"
	W1124 09:47:15.258587   49070 addons.go:248] addon dashboard should already be in state true
	I1124 09:47:15.258598   49070 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-960867"
	I1124 09:47:15.258613   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	W1124 09:47:15.258616   49070 addons.go:248] addon metrics-server should already be in state true
	I1124 09:47:15.258546   49070 config.go:182] Loaded profile config "old-k8s-version-960867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:47:15.258661   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	I1124 09:47:15.259939   49070 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:15.261371   49070 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:47:15.261386   49070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:15.261438   49070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:15.262388   49070 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-960867"
	W1124 09:47:15.262406   49070 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:47:15.262427   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	I1124 09:47:15.262624   49070 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:47:15.262672   49070 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.262684   49070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:15.262631   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:47:15.262730   49070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:47:15.264524   49070 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:15.264539   49070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:15.266016   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266045   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266525   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.266562   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266596   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.266630   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266780   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.266924   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.267639   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.268068   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.268100   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.268282   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.268396   49070 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:47:12.326011   49468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:12.326190   49468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/config.json ...
	I1124 09:47:12.326329   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:12.326476   49468 start.go:360] acquireMachinesLock for no-preload-778378: {Name:mk7b5988e566cc8ac324d849b09ff116b4f24553 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 09:47:12.620410   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:12.919094   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
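	For the no-preload profile the v1.35.0-beta.0 kubeadm binary is streamed from dl.k8s.io and pinned to its published SHA-256 rather than cached locally. A rough manual equivalent, using the URLs from the log:
	    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm"
	    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256"
	    # the .sha256 file holds only the digest, so pair it with the filename
	    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check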
	I1124 09:47:13.217519   49468 cache.go:107] acquiring lock: {Name:mk873476b8b51c5ad30a5f207562c122a407baa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217516   49468 cache.go:107] acquiring lock: {Name:mkd012b56d6bb314838e8477fa61cbc9a5cb6182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217569   49468 cache.go:107] acquiring lock: {Name:mk7b9d9c6ed27d19c384d6cbe702bfd1c838c06e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217641   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:13.217648   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:13.217626   49468 cache.go:107] acquiring lock: {Name:mk843be7defe78f14bd5310432fc15bd3fb06fcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217652   49468 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 107.897µs
	I1124 09:47:13.217662   49468 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:13.217659   49468 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 164.768µs
	I1124 09:47:13.217630   49468 cache.go:107] acquiring lock: {Name:mk59e7d3324e6d5caf067ed3caccff0e089892d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217677   49468 cache.go:107] acquiring lock: {Name:mk8faa0d7d5001227c8e0f6859d07215668f8c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217684   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:13.217693   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:13.217677   49468 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217697   49468 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 73.562µs
	I1124 09:47:13.217708   49468 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217705   49468 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 209.868µs
	I1124 09:47:13.217719   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:13.217683   49468 cache.go:107] acquiring lock: {Name:mk25a8e984499d9056c7556923373a6a0424ac0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217727   49468 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 135.845µs
	I1124 09:47:13.217734   49468 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:13.217778   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:13.217691   49468 cache.go:107] acquiring lock: {Name:mkc9a0c6b55838e55cce5ad7bc53cddbd14b524c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217799   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:13.217797   49468 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 121.913µs
	I1124 09:47:13.217814   49468 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 189.002µs
	I1124 09:47:13.217824   49468 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:13.217829   49468 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217723   49468 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:13.217884   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:13.217905   49468 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 261.163µs
	I1124 09:47:13.217913   49468 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217932   49468 cache.go:87] Successfully saved all images to host disk.
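	Every control-plane image for v1.35.0-beta.0 was already present in the local cache, so each save above completes in microseconds. A sketch to confirm what is on disk (path taken from the log):
	    ls /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/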
	I1124 09:47:14.864324   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:14.865044   49230 main.go:143] libmachine: no network interface addresses found for domain embed-certs-626350 (source=lease)
	I1124 09:47:14.865056   49230 main.go:143] libmachine: trying to list again with source=arp
	I1124 09:47:14.865478   49230 main.go:143] libmachine: unable to find current IP address of domain embed-certs-626350 in network mk-embed-certs-626350 (interfaces detected: [])
	I1124 09:47:14.865510   49230 retry.go:31] will retry after 3.41691704s: waiting for domain to come up
	I1124 09:47:15.269560   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:47:15.269572   49070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:47:15.272007   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.272425   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.272467   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.272646   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.475954   49070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:15.503781   49070 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-960867" to be "Ready" ...
	I1124 09:47:15.631576   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:15.634651   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:47:15.634676   49070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:47:15.642857   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:47:15.642877   49070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:47:15.654687   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.676828   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:47:15.676858   49070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:47:15.730265   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:47:15.730299   49070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:47:15.730496   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:47:15.730526   49070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:47:15.800810   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:47:15.800847   49070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:47:15.812108   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:47:15.812149   49070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:47:15.894307   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:47:15.894341   49070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:47:15.918524   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:47:16.026243   49070 node_ready.go:49] node "old-k8s-version-960867" is "Ready"
	I1124 09:47:16.026284   49070 node_ready.go:38] duration metric: took 522.471936ms for node "old-k8s-version-960867" to be "Ready" ...
	I1124 09:47:16.026303   49070 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:16.026361   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:16.043565   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:47:16.043594   49070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:47:16.169019   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:47:16.169049   49070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:47:16.244520   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:47:16.244543   49070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:47:16.332639   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:47:16.332691   49070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:47:16.382826   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:47:17.218202   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.586590141s)
	I1124 09:47:17.620566   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.96583784s)
	I1124 09:47:17.861066   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.942492809s)
	I1124 09:47:17.861119   49070 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-960867"
	I1124 09:47:17.861074   49070 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.834691962s)
	I1124 09:47:17.861150   49070 api_server.go:72] duration metric: took 2.602858171s to wait for apiserver process to appear ...
	I1124 09:47:17.861180   49070 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:17.861216   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:17.875545   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 200:
	ok
	I1124 09:47:17.878077   49070 api_server.go:141] control plane version: v1.28.0
	I1124 09:47:17.878113   49070 api_server.go:131] duration metric: took 16.912947ms to wait for apiserver health ...
	I1124 09:47:17.878127   49070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:17.888070   49070 system_pods.go:59] 8 kube-system pods found
	I1124 09:47:17.888101   49070 system_pods.go:61] "coredns-5dd5756b68-qjfrd" [4fd2b02c-5aae-488b-ab0c-c607053b2c61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:17.888109   49070 system_pods.go:61] "etcd-old-k8s-version-960867" [cd6416ef-d54b-45e0-b6a4-b42bcc4e02c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:17.888116   49070 system_pods.go:61] "kube-apiserver-old-k8s-version-960867" [156bcf7a-4753-4df7-b930-852c4e0b254d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:47:17.888121   49070 system_pods.go:61] "kube-controller-manager-old-k8s-version-960867" [77928b09-b20f-4328-8ef0-1545a4fe215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:17.888126   49070 system_pods.go:61] "kube-proxy-lmg4n" [d8bf94d7-0452-410a-9471-be83743449f4] Running
	I1124 09:47:17.888140   49070 system_pods.go:61] "kube-scheduler-old-k8s-version-960867" [f0eec69c-765c-4c84-b554-8236cc26249c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:17.888147   49070 system_pods.go:61] "metrics-server-57f55c9bc5-lbrng" [4b2cfd75-974a-4544-b013-3b8daa376685] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:47:17.888154   49070 system_pods.go:61] "storage-provisioner" [71f29f3d-5b04-4cb9-aab8-233ad3e7fdab] Running
	I1124 09:47:17.888186   49070 system_pods.go:74] duration metric: took 10.051591ms to wait for pod list to return data ...
	I1124 09:47:17.888198   49070 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:47:17.894210   49070 default_sa.go:45] found service account: "default"
	I1124 09:47:17.894235   49070 default_sa.go:55] duration metric: took 6.032157ms for default service account to be created ...
	I1124 09:47:17.894245   49070 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:47:17.901024   49070 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:17.901062   49070 system_pods.go:89] "coredns-5dd5756b68-qjfrd" [4fd2b02c-5aae-488b-ab0c-c607053b2c61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:17.901073   49070 system_pods.go:89] "etcd-old-k8s-version-960867" [cd6416ef-d54b-45e0-b6a4-b42bcc4e02c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:17.901088   49070 system_pods.go:89] "kube-apiserver-old-k8s-version-960867" [156bcf7a-4753-4df7-b930-852c4e0b254d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:47:17.901103   49070 system_pods.go:89] "kube-controller-manager-old-k8s-version-960867" [77928b09-b20f-4328-8ef0-1545a4fe215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:17.901110   49070 system_pods.go:89] "kube-proxy-lmg4n" [d8bf94d7-0452-410a-9471-be83743449f4] Running
	I1124 09:47:17.901119   49070 system_pods.go:89] "kube-scheduler-old-k8s-version-960867" [f0eec69c-765c-4c84-b554-8236cc26249c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:17.901126   49070 system_pods.go:89] "metrics-server-57f55c9bc5-lbrng" [4b2cfd75-974a-4544-b013-3b8daa376685] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:47:17.901133   49070 system_pods.go:89] "storage-provisioner" [71f29f3d-5b04-4cb9-aab8-233ad3e7fdab] Running
	I1124 09:47:17.901148   49070 system_pods.go:126] duration metric: took 6.896666ms to wait for k8s-apps to be running ...
	I1124 09:47:17.901178   49070 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:47:17.901241   49070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:18.410648   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.027767366s)
	I1124 09:47:18.410696   49070 system_svc.go:56] duration metric: took 509.531199ms WaitForService to wait for kubelet
	I1124 09:47:18.410720   49070 kubeadm.go:587] duration metric: took 3.152426043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:18.410798   49070 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:18.412368   49070 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-960867 addons enable metrics-server
	
	I1124 09:47:18.413790   49070 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
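	Addon manifests are staged under /etc/kubernetes/addons and applied with the version-pinned kubectl that minikube copies into the VM; a condensed sketch of the metrics-server apply shown above:
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.0/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml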
	I1124 09:47:14.841629   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:14.841663   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:14.909793   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
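	A refused connection to localhost:8443 during describe nodes means nothing was listening on the apiserver port at that moment, typically because the apiserver is still restarting. A rough sketch of the first checks, using the same tools seen elsewhere in this log:
	    # is an apiserver process up for this profile?
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    # is the container running under CRI-O?
	    sudo crictl ps -a --name=kube-apiserver
	    # is anything listening on the apiserver port? (assumes ss is present in the guest)
	    sudo ss -ltnp | grep 8443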
	I1124 09:47:17.410153   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:17.430096   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:47:17.430190   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:47:17.462256   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:17.462278   45116 cri.go:89] found id: ""
	I1124 09:47:17.462289   45116 logs.go:282] 1 containers: [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a]
	I1124 09:47:17.462353   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.467039   45116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:47:17.467120   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:47:17.505441   45116 cri.go:89] found id: "644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:17.505470   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:17.505476   45116 cri.go:89] found id: ""
	I1124 09:47:17.505487   45116 logs.go:282] 2 containers: [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2]
	I1124 09:47:17.505550   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.510071   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.514447   45116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:47:17.514515   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:47:17.548210   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:17.548235   45116 cri.go:89] found id: ""
	I1124 09:47:17.548246   45116 logs.go:282] 1 containers: [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154]
	I1124 09:47:17.548310   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.553880   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:47:17.553961   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:47:17.586651   45116 cri.go:89] found id: "af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:17.586683   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:17.586690   45116 cri.go:89] found id: ""
	I1124 09:47:17.586700   45116 logs.go:282] 2 containers: [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560]
	I1124 09:47:17.586774   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.591298   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.597170   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:47:17.597251   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:47:17.633809   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:17.633841   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:17.633849   45116 cri.go:89] found id: ""
	I1124 09:47:17.633862   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:17.633918   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.640075   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.646199   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:17.646281   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:17.685154   45116 cri.go:89] found id: "20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:17.685201   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:17.685208   45116 cri.go:89] found id: ""
	I1124 09:47:17.685219   45116 logs.go:282] 2 containers: [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:17.685284   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.691409   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.695847   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:17.695914   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:17.733789   45116 cri.go:89] found id: ""
	I1124 09:47:17.733817   45116 logs.go:282] 0 containers: []
	W1124 09:47:17.733829   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:17.733848   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:17.733862   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:17.825744   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:17.825781   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:17.861631   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:17.861659   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:17.905178   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:17.905223   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:17.926893   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:17.926926   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:17.982775   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:17.982812   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:18.021939   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:18.021972   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:18.089811   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:18.089843   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:18.140457   45116 logs.go:123] Gathering logs for kube-controller-manager [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602] ...
	I1124 09:47:18.140493   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:18.178392   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:18.178419   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:18.513554   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:18.513596   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:18.590097   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:18.590118   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:18.590134   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:18.645549   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:18.645585   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:18.689214   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:18.689250   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:18.724771   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:18.724798   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
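
The block above is minikube's diagnostics pass: for each control-plane component it lists container IDs with `crictl ps` and then tails each container's log. A minimal Go sketch of that pattern, shelling out to crictl the way ssh_runner does here (the binary path and the 400-line tail come from the log; the helper itself is illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailContainerLogs mirrors the gather loop above: resolve the IDs of a
    // named container with `crictl ps`, then tail each one's log.
    func tailContainerLogs(name string, tail int) error {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return fmt.Errorf("listing %s containers: %w", name, err)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
    		if err != nil {
    			// A container can be garbage-collected between the ps and the
    			// logs call, which is exactly the NotFound failure the later
    			// kube-controller-manager gather in this log hits; treat it as
    			// a warning and keep going.
    			fmt.Printf("W: logs for %s [%s] failed: %v\n", name, id, err)
    			continue
    		}
    		fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
    	}
    	return nil
    }

    func main() {
    	_ = tailContainerLogs("kube-apiserver", 400)
    }
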
	I1124 09:47:18.414937   49070 addons.go:530] duration metric: took 3.156602517s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1124 09:47:18.416328   49070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:18.416345   49070 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:18.416357   49070 node_conditions.go:105] duration metric: took 5.553892ms to run NodePressure ...
	I1124 09:47:18.416369   49070 start.go:242] waiting for startup goroutines ...
	I1124 09:47:18.416378   49070 start.go:247] waiting for cluster config update ...
	I1124 09:47:18.416395   49070 start.go:256] writing updated cluster config ...
	I1124 09:47:18.416656   49070 ssh_runner.go:195] Run: rm -f paused
	I1124 09:47:18.425881   49070 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:18.436418   49070 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-qjfrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:19.789544   49468 start.go:364] duration metric: took 7.462995654s to acquireMachinesLock for "no-preload-778378"
	I1124 09:47:19.789623   49468 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:47:19.789632   49468 fix.go:54] fixHost starting: 
	I1124 09:47:19.791934   49468 fix.go:112] recreateIfNeeded on no-preload-778378: state=Stopped err=<nil>
	W1124 09:47:19.791963   49468 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:47:18.283800   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.284517   49230 main.go:143] libmachine: domain embed-certs-626350 has current primary IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.284533   49230 main.go:143] libmachine: found domain IP: 192.168.61.81
	I1124 09:47:18.284540   49230 main.go:143] libmachine: reserving static IP address...
	I1124 09:47:18.285114   49230 main.go:143] libmachine: unable to find host DHCP lease matching {name: "embed-certs-626350", mac: "52:54:00:21:fc:08", ip: "192.168.61.81"} in network mk-embed-certs-626350
	I1124 09:47:18.509180   49230 main.go:143] libmachine: reserved static IP address 192.168.61.81 for domain embed-certs-626350
	I1124 09:47:18.509208   49230 main.go:143] libmachine: waiting for SSH...
	I1124 09:47:18.509217   49230 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 09:47:18.513019   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.513547   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.513572   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.513861   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.514191   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.514207   49230 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 09:47:18.631820   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:18.632210   49230 main.go:143] libmachine: domain creation complete
	I1124 09:47:18.633710   49230 machine.go:94] provisionDockerMachine start ...
	I1124 09:47:18.636441   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.636859   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.636892   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.637061   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.637367   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.637381   49230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:47:18.752906   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 09:47:18.752948   49230 buildroot.go:166] provisioning hostname "embed-certs-626350"
	I1124 09:47:18.756769   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.757281   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.757318   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.757527   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.757742   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.757755   49230 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-626350 && echo "embed-certs-626350" | sudo tee /etc/hostname
	I1124 09:47:18.890389   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-626350
	
	I1124 09:47:18.893735   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.894243   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.894268   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.894517   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.894751   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.894769   49230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-626350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-626350/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-626350' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:19.014701   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
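
provisionDockerMachine drives the guest entirely over SSH: each "About to run SSH command" line above is a one-shot session like the hostname and /etc/hosts script just executed. A minimal sketch of that pattern with golang.org/x/crypto/ssh; the address and user match the sshutil lines in this log, while the key path and the helper are illustrative:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote opens one SSH session and runs a single command, the way the
    // provisioner does for `hostname`, the /etc/hosts edit, and so on.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("192.168.61.81:22", "docker", "/path/to/id_rsa", "hostname")
    	fmt.Println(out, err)
    }
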
	I1124 09:47:19.014749   49230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 09:47:19.014806   49230 buildroot.go:174] setting up certificates
	I1124 09:47:19.014825   49230 provision.go:84] configureAuth start
	I1124 09:47:19.017620   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.018003   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.018024   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.020335   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.020808   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.020833   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.020973   49230 provision.go:143] copyHostCerts
	I1124 09:47:19.021017   49230 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem, removing ...
	I1124 09:47:19.021033   49230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem
	I1124 09:47:19.021102   49230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 09:47:19.021236   49230 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem, removing ...
	I1124 09:47:19.021245   49230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem
	I1124 09:47:19.021275   49230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 09:47:19.021345   49230 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem, removing ...
	I1124 09:47:19.021353   49230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem
	I1124 09:47:19.021383   49230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 09:47:19.021443   49230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.embed-certs-626350 san=[127.0.0.1 192.168.61.81 embed-certs-626350 localhost minikube]
	I1124 09:47:19.078151   49230 provision.go:177] copyRemoteCerts
	I1124 09:47:19.078217   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:19.080905   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.081347   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.081374   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.081529   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.169233   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:19.198798   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 09:47:19.228847   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:19.258355   49230 provision.go:87] duration metric: took 243.499136ms to configureAuth
	I1124 09:47:19.258384   49230 buildroot.go:189] setting minikube options for container-runtime
	I1124 09:47:19.258599   49230 config.go:182] Loaded profile config "embed-certs-626350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:47:19.261825   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.262357   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.262384   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.262639   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:19.262839   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:19.262853   49230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:19.526787   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:19.526820   49230 machine.go:97] duration metric: took 893.092379ms to provisionDockerMachine
	I1124 09:47:19.526834   49230 client.go:176] duration metric: took 19.6512214s to LocalClient.Create
	I1124 09:47:19.526862   49230 start.go:167] duration metric: took 19.651290285s to libmachine.API.Create "embed-certs-626350"
	I1124 09:47:19.526877   49230 start.go:293] postStartSetup for "embed-certs-626350" (driver="kvm2")
	I1124 09:47:19.526897   49230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:19.526982   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:19.530259   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.530687   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.530726   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.530931   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.618509   49230 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:19.623597   49230 info.go:137] Remote host: Buildroot 2025.02
	I1124 09:47:19.623622   49230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 09:47:19.623682   49230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 09:47:19.623786   49230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem -> 96292.pem in /etc/ssl/certs
	I1124 09:47:19.623900   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:47:19.636104   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:19.667047   49230 start.go:296] duration metric: took 140.150925ms for postStartSetup
	I1124 09:47:19.670434   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.670919   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.670948   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.671236   49230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/config.json ...
	I1124 09:47:19.671448   49230 start.go:128] duration metric: took 19.798158784s to createHost
	I1124 09:47:19.673938   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.674398   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.674476   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.674727   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:19.674926   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:19.674936   49230 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 09:47:19.789344   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763977639.751016482
	
	I1124 09:47:19.789371   49230 fix.go:216] guest clock: 1763977639.751016482
	I1124 09:47:19.789380   49230 fix.go:229] Guest: 2025-11-24 09:47:19.751016482 +0000 UTC Remote: 2025-11-24 09:47:19.671461198 +0000 UTC m=+26.691953308 (delta=79.555284ms)
	I1124 09:47:19.789398   49230 fix.go:200] guest clock delta is within tolerance: 79.555284ms
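
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~79ms skew as within tolerance. A small sketch of that comparison, using the timestamps from this run; the one-second threshold is an illustrative assumption, not the value minikube uses:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses the guest's `date +%s.%N` output (e.g.
    // "1763977639.751016482") and returns its offset from the given host time.
    func guestClockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(hostNow), nil
    }

    func main() {
    	// Values taken from the fix.go lines above (guest vs. Remote timestamp).
    	delta, _ := guestClockDelta("1763977639.751016482", time.Unix(1763977639, 671461198))
    	// 1s is an illustrative tolerance, not minikube's actual setting.
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, math.Abs(delta.Seconds()) < 1.0)
    }
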
	I1124 09:47:19.789403   49230 start.go:83] releasing machines lock for "embed-certs-626350", held for 19.916365823s
	I1124 09:47:19.792898   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.793295   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.793326   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.793884   49230 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:19.794001   49230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:19.797857   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.797936   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.798380   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.798411   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.798503   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.798549   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.798660   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.798881   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.885367   49230 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:19.924042   49230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:20.089152   49230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:47:20.098284   49230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:20.098350   49230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:20.119364   49230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:47:20.119392   49230 start.go:496] detecting cgroup driver to use...
	I1124 09:47:20.119457   49230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:20.139141   49230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:20.158478   49230 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:20.158557   49230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:20.177306   49230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:20.194720   49230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:20.355648   49230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:20.582731   49230 docker.go:234] disabling docker service ...
	I1124 09:47:20.582794   49230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:20.601075   49230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:20.621424   49230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:20.787002   49230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:20.944675   49230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:20.962188   49230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:20.985630   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:21.276908   49230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:21.277001   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.290782   49230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:21.290847   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.307918   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.326726   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.340332   49230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:21.354479   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.368743   49230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.393854   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
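
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; this is a reconstruction from the commands in this log (section headers omitted), not a dump of the actual file:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
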
	I1124 09:47:21.407231   49230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:21.420253   49230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 09:47:21.420343   49230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 09:47:21.443332   49230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
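
The three steps above are kernel prep for pod networking: the bridge-netfilter sysctl is missing until br_netfilter is loaded, so the probe fails, the module is loaded, and IPv4 forwarding is switched on. A hedged Go sketch of the same sequence; the paths and module name are exactly those in the log, the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureNetfilter mirrors the sequence above: if the bridge-nf sysctl is not
    // visible yet, load br_netfilter, then turn on IPv4 forwarding.
    func ensureNetfilter() error {
    	const brSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(brSysctl); err != nil {
    		// Equivalent of the failed `sysctl` probe: the file only appears
    		// once the br_netfilter module is loaded.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	// Same effect as `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
    	if err := ensureNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
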
	I1124 09:47:21.459149   49230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:21.664261   49230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:21.821845   49230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:21.821919   49230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:21.829623   49230 start.go:564] Will wait 60s for crictl version
	I1124 09:47:21.829693   49230 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.835015   49230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 09:47:21.878357   49230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 09:47:21.878454   49230 ssh_runner.go:195] Run: crio --version
	I1124 09:47:21.919435   49230 ssh_runner.go:195] Run: crio --version
	I1124 09:47:21.968298   49230 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
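
The runtime handoff above is gated on two timed waits: the crio socket appearing and `crictl version` answering, each with a 60-second budget. A generic sketch of that wait loop; the timeout and paths are taken from the log, the one-second poll interval is an arbitrary choice:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitFor polls check() until it succeeds or the budget is spent.
    func waitFor(timeout, interval time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v: %w", timeout, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitFor(60*time.Second, time.Second, func() error {
    		if _, err := os.Stat("/var/run/crio/crio.sock"); err != nil {
    			return err
    		}
    		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
    	})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
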
	I1124 09:47:19.793645   49468 out.go:252] * Restarting existing kvm2 VM for "no-preload-778378" ...
	I1124 09:47:19.793695   49468 main.go:143] libmachine: starting domain...
	I1124 09:47:19.793709   49468 main.go:143] libmachine: ensuring networks are active...
	I1124 09:47:19.794921   49468 main.go:143] libmachine: Ensuring network default is active
	I1124 09:47:19.795508   49468 main.go:143] libmachine: Ensuring network mk-no-preload-778378 is active
	I1124 09:47:19.796583   49468 main.go:143] libmachine: getting domain XML...
	I1124 09:47:19.797865   49468 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-778378</name>
	  <uuid>2076f1b8-6857-452b-a9dc-78378add2d65</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/no-preload-778378.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8b:fd:5d'/>
	      <source network='mk-no-preload-778378'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e5:48:00'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
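
The kvm2 driver restarts the stopped VM by handing this saved domain XML back to libvirt and creating the domain. A minimal sketch of that step with the libvirt Go bindings; the connection URI matches the KVMQemuURI in the profile config, the XML file name is a placeholder, and the real driver code differs:

    package main

    import (
    	"log"
    	"os"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	// qemu:///system is the KVMQemuURI recorded in the profile config.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile("no-preload-778378.xml") // the <domain> document above
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Define (or redefine) the persistent domain, then boot it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("domain is now running")
    }
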
	
	I1124 09:47:21.147488   49468 main.go:143] libmachine: waiting for domain to start...
	I1124 09:47:21.149177   49468 main.go:143] libmachine: domain is now running
	I1124 09:47:21.149198   49468 main.go:143] libmachine: waiting for IP...
	I1124 09:47:21.150043   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.150649   49468 main.go:143] libmachine: domain no-preload-778378 has current primary IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.150669   49468 main.go:143] libmachine: found domain IP: 192.168.72.119
	I1124 09:47:21.150677   49468 main.go:143] libmachine: reserving static IP address...
	I1124 09:47:21.151148   49468 main.go:143] libmachine: found host DHCP lease matching {name: "no-preload-778378", mac: "52:54:00:8b:fd:5d", ip: "192.168.72.119"} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:44:11 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:21.151213   49468 main.go:143] libmachine: skip adding static IP to network mk-no-preload-778378 - found existing host DHCP lease matching {name: "no-preload-778378", mac: "52:54:00:8b:fd:5d", ip: "192.168.72.119"}
	I1124 09:47:21.151234   49468 main.go:143] libmachine: reserved static IP address 192.168.72.119 for domain no-preload-778378
	I1124 09:47:21.151247   49468 main.go:143] libmachine: waiting for SSH...
	I1124 09:47:21.151257   49468 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 09:47:21.154085   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.154524   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:44:11 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:21.154548   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.154714   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:21.154925   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:21.154937   49468 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 09:47:21.972912   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:21.973385   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:21.973410   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:21.973774   49230 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:21.979349   49230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:47:22.001043   49230 kubeadm.go:884] updating cluster {Name:embed-certs-626350 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-626350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.81 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:22.001327   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:22.311077   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:22.600356   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:22.924604   49230 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:47:22.924763   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:21.259835   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:21.279406   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:47:21.279479   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:47:21.317875   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:21.317904   45116 cri.go:89] found id: ""
	I1124 09:47:21.317915   45116 logs.go:282] 1 containers: [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a]
	I1124 09:47:21.317983   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.324465   45116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:47:21.324549   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:47:21.366544   45116 cri.go:89] found id: "644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:21.366572   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:21.366580   45116 cri.go:89] found id: ""
	I1124 09:47:21.366591   45116 logs.go:282] 2 containers: [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2]
	I1124 09:47:21.366659   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.372428   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.376885   45116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:47:21.376951   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:47:21.415145   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:21.415186   45116 cri.go:89] found id: ""
	I1124 09:47:21.415196   45116 logs.go:282] 1 containers: [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154]
	I1124 09:47:21.415260   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.421740   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:47:21.421807   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:47:21.468682   45116 cri.go:89] found id: "af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:21.468706   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:21.468711   45116 cri.go:89] found id: ""
	I1124 09:47:21.468721   45116 logs.go:282] 2 containers: [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560]
	I1124 09:47:21.468788   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.473724   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.478444   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:47:21.478528   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:47:21.524915   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:21.524948   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:21.524955   45116 cri.go:89] found id: ""
	I1124 09:47:21.524965   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:21.525034   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.533205   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.539850   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:21.539954   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:21.582298   45116 cri.go:89] found id: "20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:21.582323   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:21.582328   45116 cri.go:89] found id: ""
	I1124 09:47:21.582337   45116 logs.go:282] 2 containers: [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:21.582402   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.587194   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.591951   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:21.592024   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:21.629236   45116 cri.go:89] found id: ""
	I1124 09:47:21.629265   45116 logs.go:282] 0 containers: []
	W1124 09:47:21.629287   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:21.629310   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:21.629336   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:21.678519   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:21.678555   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:21.736902   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:21.736949   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:21.793813   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:21.793850   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:21.915449   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:21.915496   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:22.004739   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:22.004771   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:22.024129   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:22.024169   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:22.111618   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:22.111645   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:22.111677   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:22.156847   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:22.156879   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:22.203259   45116 logs.go:123] Gathering logs for kube-controller-manager [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602] ...
	I1124 09:47:22.203295   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:22.238845   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:22.238876   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	W1124 09:47:22.274501   45116 logs.go:130] failed kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:47:22.267509   12970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist" containerID="a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	time="2025-11-24T09:47:22Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 09:47:22.267509   12970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist" containerID="a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	time="2025-11-24T09:47:22Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist"
	
	** /stderr **
	I1124 09:47:22.274534   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:22.274550   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:22.636354   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:22.636389   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:22.687742   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:22.687782   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:22.743650   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:22.743686   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	W1124 09:47:20.444964   49070 pod_ready.go:104] pod "coredns-5dd5756b68-qjfrd" is not "Ready", error: <nil>
	I1124 09:47:21.946679   49070 pod_ready.go:94] pod "coredns-5dd5756b68-qjfrd" is "Ready"
	I1124 09:47:21.946718   49070 pod_ready.go:86] duration metric: took 3.510269923s for pod "coredns-5dd5756b68-qjfrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:21.956408   49070 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:47:23.963341   49070 pod_ready.go:104] pod "etcd-old-k8s-version-960867" is not "Ready", error: <nil>
	I1124 09:47:24.216442   49468 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I1124 09:47:23.219039   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:23.504209   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:23.793505   49230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:23.832951   49230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1124 09:47:23.833038   49230 ssh_runner.go:195] Run: which lz4
	I1124 09:47:23.839070   49230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 09:47:23.845266   49230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 09:47:23.845316   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1124 09:47:25.174773   49230 crio.go:462] duration metric: took 1.335742598s to copy over tarball
	I1124 09:47:25.174848   49230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 09:47:26.894011   49230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.71912962s)
	I1124 09:47:26.894046   49230 crio.go:469] duration metric: took 1.719246192s to extract the tarball
	I1124 09:47:26.894056   49230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 09:47:26.938244   49230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:26.981460   49230 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:26.981494   49230 cache_images.go:86] Images are preloaded, skipping loading
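The preload path above (check images, copy the tarball, extract, re-check) can be reproduced manually; a sketch using the same commands shown in the Run: lines:

	# illustrative: the preload check/extract sequence by hand
	sudo crictl images --output json                    # what the runtime already has
	sudo tar --xattrs --xattrs-include security.capability \
	     -I lz4 -C /var -xf /preloaded.tar.lz4          # unpack the preloaded images under /var
	sudo crictl images --output json                    # confirm the k8s images are now present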
	I1124 09:47:26.981502   49230 kubeadm.go:935] updating node { 192.168.61.81 8443 v1.34.2 crio true true} ...
	I1124 09:47:26.981587   49230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-626350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-626350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:26.981647   49230 ssh_runner.go:195] Run: crio config
	I1124 09:47:27.038918   49230 cni.go:84] Creating CNI manager for ""
	I1124 09:47:27.038943   49230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:27.038960   49230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:27.038994   49230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.81 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-626350 NodeName:embed-certs-626350 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:27.039138   49230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-626350"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
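	A config of this shape can be sanity-checked without changing node state; a hedged sketch (the --dry-run flag is an editorial addition, not something the harness runs) against the file that is later written as /var/tmp/minikube/kubeadm.yaml:

	# illustrative: validate the generated config without applying it
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	     kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run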
	
	I1124 09:47:27.039231   49230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:47:27.052072   49230 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:27.052151   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:27.064788   49230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1124 09:47:27.088365   49230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:47:27.112857   49230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1124 09:47:27.133030   49230 ssh_runner.go:195] Run: grep 192.168.61.81	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:27.137548   49230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
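The one-liner above is a compact rewrite-and-copy of /etc/hosts; the same idiom, expanded with comments for readability (illustrative only):

	# drop any stale control-plane entry, append the fresh one, then install the temp file
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;
	  echo "192.168.61.81	control-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts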
	I1124 09:47:27.155572   49230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:27.310364   49230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:27.346932   49230 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350 for IP: 192.168.61.81
	I1124 09:47:27.346967   49230 certs.go:195] generating shared ca certs ...
	I1124 09:47:27.346987   49230 certs.go:227] acquiring lock for ca certs: {Name:mkc847d4fb6fb61872e24a1bb00356ff9ef1a409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.347183   49230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key
	I1124 09:47:27.347228   49230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key
	I1124 09:47:27.347236   49230 certs.go:257] generating profile certs ...
	I1124 09:47:27.347295   49230 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.key
	I1124 09:47:27.347307   49230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.crt with IP's: []
	I1124 09:47:27.636103   49230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.crt ...
	I1124 09:47:27.636131   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.crt: {Name:mk340735111201655131ce5d89db6955bfd8290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.643441   49230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.key ...
	I1124 09:47:27.643491   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.key: {Name:mkf258701eb8ed1624ff1812815a2c975bcca668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.643645   49230 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c
	I1124 09:47:27.643666   49230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.81]
	I1124 09:47:27.744265   49230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c ...
	I1124 09:47:27.744293   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c: {Name:mk8ea75d55b15efe354dba9875eb752d0a05347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.744488   49230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c ...
	I1124 09:47:27.744506   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c: {Name:mk58d5d689d1c5e5243e47b705f1e000fdb59d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.744609   49230 certs.go:382] copying /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c -> /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt
	I1124 09:47:27.744685   49230 certs.go:386] copying /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c -> /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key
	I1124 09:47:27.744739   49230 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key
	I1124 09:47:27.744766   49230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt with IP's: []
	I1124 09:47:27.761784   49230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt ...
	I1124 09:47:27.761811   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt: {Name:mk4f44d39bc0c8806d7120167d989468c7835a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.761997   49230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key ...
	I1124 09:47:27.762014   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key: {Name:mk6c8b61277ff226ad7caaff39303caa33ed0c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
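Any of the freshly written profile certs can be inspected with openssl to confirm subject and validity; a minimal sketch using the apiserver cert path from the lines above:

	# illustrative: show subject and validity window of a generated cert
	openssl x509 -noout -subject -dates \
	     -in /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt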
	I1124 09:47:27.762225   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem (1338 bytes)
	W1124 09:47:27.762266   49230 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:27.762278   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:47:27.762301   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:27.762324   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:27.762348   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:27.762387   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:27.762896   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:27.799260   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:27.835181   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:27.871331   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:27.909219   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:47:27.959186   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:25.285986   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:25.310454   45116 kubeadm.go:602] duration metric: took 4m14.294214023s to restartPrimaryControlPlane
	W1124 09:47:25.310536   45116 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:47:25.310655   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W1124 09:47:25.964788   49070 pod_ready.go:104] pod "etcd-old-k8s-version-960867" is not "Ready", error: <nil>
	W1124 09:47:28.310274   49070 pod_ready.go:104] pod "etcd-old-k8s-version-960867" is not "Ready", error: <nil>
	I1124 09:47:30.169294   45116 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.858611236s)
	I1124 09:47:30.169392   45116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:30.192823   45116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:47:30.207816   45116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:47:30.227083   45116 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:47:30.227105   45116 kubeadm.go:158] found existing configuration files:
	
	I1124 09:47:30.227187   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:47:30.245334   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:47:30.245402   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:47:30.262454   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:47:30.281200   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:47:30.281442   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:47:30.300737   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:47:30.319229   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:47:30.319299   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:47:30.337698   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:47:30.357706   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:47:30.357777   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:47:30.379012   45116 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 09:47:30.450856   45116 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:47:30.450934   45116 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:47:30.599580   45116 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:47:30.599741   45116 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:47:30.599882   45116 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:47:30.611941   45116 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:47:29.969460   49070 pod_ready.go:94] pod "etcd-old-k8s-version-960867" is "Ready"
	I1124 09:47:29.969495   49070 pod_ready.go:86] duration metric: took 8.013057466s for pod "etcd-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:29.980910   49070 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:29.991545   49070 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-960867" is "Ready"
	I1124 09:47:29.991573   49070 pod_ready.go:86] duration metric: took 10.63769ms for pod "kube-apiserver-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.001331   49070 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.024156   49070 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-960867" is "Ready"
	I1124 09:47:30.024205   49070 pod_ready.go:86] duration metric: took 22.84268ms for pod "kube-controller-manager-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.038886   49070 pod_ready.go:83] waiting for pod "kube-proxy-lmg4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.267707   49070 pod_ready.go:94] pod "kube-proxy-lmg4n" is "Ready"
	I1124 09:47:30.267743   49070 pod_ready.go:86] duration metric: took 228.825451ms for pod "kube-proxy-lmg4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.469447   49070 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.866953   49070 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-960867" is "Ready"
	I1124 09:47:30.866993   49070 pod_ready.go:86] duration metric: took 397.512223ms for pod "kube-scheduler-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.867013   49070 pod_ready.go:40] duration metric: took 12.44109528s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:30.931711   49070 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:47:30.933482   49070 out.go:203] 
	W1124 09:47:30.934896   49070 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:47:30.936033   49070 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:47:30.937400   49070 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-960867" cluster and "default" namespace by default
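The version-skew warning above comes with its own remedy: use the version-matched kubectl that minikube bundles rather than the host binary, e.g. (illustrative):

	# illustrative: run a kubectl matching the cluster's Kubernetes version
	minikube -p old-k8s-version-960867 kubectl -- get pods -A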
	I1124 09:47:30.615363   45116 out.go:252]   - Generating certificates and keys ...
	I1124 09:47:30.615501   45116 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:47:30.615596   45116 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:47:30.615704   45116 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:47:30.615769   45116 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:47:30.615845   45116 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:47:30.615915   45116 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:47:30.615991   45116 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:47:30.616069   45116 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:47:30.616178   45116 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:47:30.616293   45116 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:47:30.616353   45116 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:47:30.616430   45116 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:47:30.710466   45116 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:47:30.889597   45116 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:47:31.280869   45116 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:47:31.529347   45116 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:47:31.804508   45116 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:47:31.805285   45116 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:47:31.809182   45116 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:47:30.296419   49468 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I1124 09:47:28.043772   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:28.174421   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:47:28.212152   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /usr/share/ca-certificates/96292.pem (1708 bytes)
	I1124 09:47:28.250367   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:28.284602   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem --> /usr/share/ca-certificates/9629.pem (1338 bytes)
	I1124 09:47:28.323091   49230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:28.344189   49230 ssh_runner.go:195] Run: openssl version
	I1124 09:47:28.351343   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96292.pem && ln -fs /usr/share/ca-certificates/96292.pem /etc/ssl/certs/96292.pem"
	I1124 09:47:28.366811   49230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96292.pem
	I1124 09:47:28.372317   49230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:42 /usr/share/ca-certificates/96292.pem
	I1124 09:47:28.372387   49230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96292.pem
	I1124 09:47:28.379880   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:28.397799   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:28.412722   49230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:28.419873   49230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:28.419941   49230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:28.427569   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:28.443947   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9629.pem && ln -fs /usr/share/ca-certificates/9629.pem /etc/ssl/certs/9629.pem"
	I1124 09:47:28.458562   49230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9629.pem
	I1124 09:47:28.463812   49230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:42 /usr/share/ca-certificates/9629.pem
	I1124 09:47:28.463873   49230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9629.pem
	I1124 09:47:28.471104   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9629.pem /etc/ssl/certs/51391683.0"
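The <hash>.0 symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the system trust store looks certificates up; a sketch of deriving one by hand:

	# illustrative: the symlink name is the cert's subject hash plus ".0"
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0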
	I1124 09:47:28.485441   49230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:28.490493   49230 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:47:28.490557   49230 kubeadm.go:401] StartCluster: {Name:embed-certs-626350 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-626350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.81 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:28.490633   49230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:28.490706   49230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:28.526176   49230 cri.go:89] found id: ""
	I1124 09:47:28.526261   49230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:28.538731   49230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:47:28.556711   49230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:47:28.572053   49230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:47:28.572093   49230 kubeadm.go:158] found existing configuration files:
	
	I1124 09:47:28.572147   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:47:28.587240   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:47:28.587313   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:47:28.604867   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:47:28.620518   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:47:28.620588   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:47:28.637209   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:47:28.652740   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:47:28.652816   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:47:28.669669   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:47:28.685604   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:47:28.685678   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:47:28.697996   49230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 09:47:28.905963   49230 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:47:31.814094   45116 out.go:252]   - Booting up control plane ...
	I1124 09:47:31.814296   45116 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:47:31.814460   45116 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:47:31.814610   45116 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:47:31.846212   45116 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:47:31.846503   45116 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:47:31.857349   45116 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:47:31.857574   45116 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:47:31.857680   45116 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:47:32.091015   45116 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:47:32.091227   45116 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:47:32.603864   45116 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 511.088713ms
	I1124 09:47:32.608667   45116 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:47:32.609024   45116 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.144:8443/livez
	I1124 09:47:32.609268   45116 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:47:32.609378   45116 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
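The three health endpoints kubeadm polls above can also be probed directly from the node; a hedged sketch (it assumes the default behavior of allowing unauthenticated access to these health paths, which kubeadm's own check relies on):

	# illustrative: hit the same endpoints kubeadm's control-plane-check uses
	curl -ks https://192.168.39.144:8443/livez     # kube-apiserver
	curl -ks https://127.0.0.1:10257/healthz       # kube-controller-manager
	curl -ks https://127.0.0.1:10259/livez         # kube-scheduler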
	I1124 09:47:33.297474   49468 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: connection refused
	I1124 09:47:36.426245   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:36.431569   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.432272   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.432307   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.432575   49468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/config.json ...
	I1124 09:47:36.432840   49468 machine.go:94] provisionDockerMachine start ...
	I1124 09:47:36.436142   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.436610   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.436646   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.437023   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:36.437334   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:36.437351   49468 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:47:36.560509   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 09:47:36.560558   49468 buildroot.go:166] provisioning hostname "no-preload-778378"
	I1124 09:47:36.564819   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.565392   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.565427   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.565689   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:36.565977   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:36.566008   49468 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-778378 && echo "no-preload-778378" | sudo tee /etc/hostname
	I1124 09:47:36.718605   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-778378
	
	I1124 09:47:36.722692   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.723181   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.723266   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.723614   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:36.723903   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:36.723928   49468 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778378' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778378/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778378' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:36.856289   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
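The hostname script above only rewrites the 127.0.1.1 line; a quick way to confirm the result on the guest (illustrative):

	hostname                              # should print no-preload-778378
	grep '^127.0.1.1' /etc/hosts          # should map 127.0.1.1 to no-preload-778378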
	I1124 09:47:36.856323   49468 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 09:47:36.856354   49468 buildroot.go:174] setting up certificates
	I1124 09:47:36.856373   49468 provision.go:84] configureAuth start
	I1124 09:47:36.861471   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.861921   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.861950   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.865346   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.865793   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.865823   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.865982   49468 provision.go:143] copyHostCerts
	I1124 09:47:36.866039   49468 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem, removing ...
	I1124 09:47:36.866057   49468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem
	I1124 09:47:36.866128   49468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 09:47:36.866301   49468 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem, removing ...
	I1124 09:47:36.866315   49468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem
	I1124 09:47:36.866352   49468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 09:47:36.866440   49468 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem, removing ...
	I1124 09:47:36.866450   49468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem
	I1124 09:47:36.866478   49468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 09:47:36.866553   49468 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.no-preload-778378 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-778378]
	I1124 09:47:37.079398   49468 provision.go:177] copyRemoteCerts
	I1124 09:47:37.079469   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:37.082568   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.083059   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.083087   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.083393   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:37.175510   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:37.218719   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:37.259590   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:47:36.511991   45116 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.903315485s
	I1124 09:47:37.385036   45116 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.776583383s
	I1124 09:47:39.612514   45116 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003630831s
	I1124 09:47:39.637176   45116 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:47:39.655928   45116 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:47:39.673008   45116 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:47:39.673307   45116 kubeadm.go:319] [mark-control-plane] Marking the node pause-377882 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:47:39.689759   45116 kubeadm.go:319] [bootstrap-token] Using token: dkfba4.wiyzaabyuc92dy77
	I1124 09:47:37.303189   49468 provision.go:87] duration metric: took 446.792901ms to configureAuth
	I1124 09:47:37.303220   49468 buildroot.go:189] setting minikube options for container-runtime
	I1124 09:47:37.303468   49468 config.go:182] Loaded profile config "no-preload-778378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:37.307947   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.308593   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.308683   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.309192   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:37.309697   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:37.309777   49468 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:37.654536   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:37.654657   49468 machine.go:97] duration metric: took 1.221731659s to provisionDockerMachine
	I1124 09:47:37.654684   49468 start.go:293] postStartSetup for "no-preload-778378" (driver="kvm2")
	I1124 09:47:37.654701   49468 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:37.654784   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:37.658780   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.659326   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.659368   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.659753   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:37.748941   49468 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:37.754891   49468 info.go:137] Remote host: Buildroot 2025.02
	I1124 09:47:37.754924   49468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 09:47:37.755011   49468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 09:47:37.755121   49468 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem -> 96292.pem in /etc/ssl/certs
	I1124 09:47:37.755290   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:47:37.769292   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:37.805342   49468 start.go:296] duration metric: took 150.641101ms for postStartSetup
	I1124 09:47:37.805401   49468 fix.go:56] duration metric: took 18.015768823s for fixHost
	I1124 09:47:37.808800   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.809270   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.809331   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.809584   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:37.809861   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:37.809878   49468 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 09:47:37.923307   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763977657.869488968
	
	I1124 09:47:37.923335   49468 fix.go:216] guest clock: 1763977657.869488968
	I1124 09:47:37.923345   49468 fix.go:229] Guest: 2025-11-24 09:47:37.869488968 +0000 UTC Remote: 2025-11-24 09:47:37.80540708 +0000 UTC m=+25.610301982 (delta=64.081888ms)
	I1124 09:47:37.923363   49468 fix.go:200] guest clock delta is within tolerance: 64.081888ms
	I1124 09:47:37.923369   49468 start.go:83] releasing machines lock for "no-preload-778378", held for 18.133776137s
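The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it against the host-side timestamp, and accept the machine when the delta is small (64.081888ms here). A minimal Go sketch of that comparison follows; the 1s tolerance is an assumption for illustration only, since the log shows just that a ~64ms delta was accepted.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance mirrors the check described in the fix.go log
// lines: take the absolute difference between the guest and host clocks and
// accept it when it is at or below the tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(64 * time.Millisecond)                         // a delta comparable to the 64.081888ms in the log
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second) // 1s tolerance is an assumption
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
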
	I1124 09:47:37.927122   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.927625   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.927678   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.928253   49468 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:37.928303   49468 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:37.932007   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932213   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932513   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.932549   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932576   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.932600   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932734   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:37.932920   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:38.016517   49468 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:38.051815   49468 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:38.216433   49468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:47:38.228075   49468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:38.228180   49468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:38.257370   49468 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:47:38.257398   49468 start.go:496] detecting cgroup driver to use...
	I1124 09:47:38.257468   49468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:38.288592   49468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:38.314406   49468 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:38.314503   49468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:38.342931   49468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:38.367024   49468 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:38.550224   49468 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:38.799542   49468 docker.go:234] disabling docker service ...
	I1124 09:47:38.799618   49468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:38.828617   49468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:38.858526   49468 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:39.109058   49468 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:39.337875   49468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:39.370317   49468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:39.405202   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:39.716591   49468 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:39.716684   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.733646   49468 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:39.733737   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.749322   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.762122   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.776172   49468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:39.793655   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.808258   49468 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.837251   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.851250   49468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:39.866186   49468 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 09:47:39.866250   49468 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 09:47:39.893140   49468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:39.907382   49468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:40.073077   49468 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:40.230279   49468 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:40.230392   49468 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:40.238673   49468 start.go:564] Will wait 60s for crictl version
	I1124 09:47:40.238759   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:40.245716   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 09:47:40.299447   49468 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 09:47:40.299569   49468 ssh_runner.go:195] Run: crio --version
	I1124 09:47:40.343754   49468 ssh_runner.go:195] Run: crio --version
	I1124 09:47:40.386006   49468 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
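The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager. A minimal Go sketch of the same two substitutions, included only to make the sed expressions easier to read; the input string is a stand-in, not the real drop-in file.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same two substitutions as the sed commands in
// the log: force the pause image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf; the real file holds more settings.
	in := "pause_image = \"example.invalid/pause:old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}
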
	I1124 09:47:39.692299   45116 out.go:252]   - Configuring RBAC rules ...
	I1124 09:47:39.692438   45116 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:47:39.701326   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:47:39.711822   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:47:39.717209   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:47:39.727232   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:47:39.733694   45116 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:47:40.108285   45116 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:47:40.719586   45116 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:47:41.020955   45116 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:47:41.022360   45116 kubeadm.go:319] 
	I1124 09:47:41.022508   45116 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:47:41.022527   45116 kubeadm.go:319] 
	I1124 09:47:41.022663   45116 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:47:41.022708   45116 kubeadm.go:319] 
	I1124 09:47:41.022785   45116 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:47:41.022895   45116 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:47:41.022965   45116 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:47:41.022971   45116 kubeadm.go:319] 
	I1124 09:47:41.023042   45116 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:47:41.023053   45116 kubeadm.go:319] 
	I1124 09:47:41.023138   45116 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:47:41.023173   45116 kubeadm.go:319] 
	I1124 09:47:41.023243   45116 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:47:41.023376   45116 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:47:41.023503   45116 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:47:41.023511   45116 kubeadm.go:319] 
	I1124 09:47:41.023613   45116 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:47:41.023742   45116 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:47:41.023754   45116 kubeadm.go:319] 
	I1124 09:47:41.023880   45116 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dkfba4.wiyzaabyuc92dy77 \
	I1124 09:47:41.024030   45116 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa \
	I1124 09:47:41.024076   45116 kubeadm.go:319] 	--control-plane 
	I1124 09:47:41.024086   45116 kubeadm.go:319] 
	I1124 09:47:41.024209   45116 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:47:41.024221   45116 kubeadm.go:319] 
	I1124 09:47:41.024326   45116 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dkfba4.wiyzaabyuc92dy77 \
	I1124 09:47:41.024467   45116 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa 
	I1124 09:47:41.027844   45116 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:47:41.027874   45116 cni.go:84] Creating CNI manager for ""
	I1124 09:47:41.027883   45116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:41.030429   45116 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 09:47:40.391437   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:40.392027   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:40.392050   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:40.392304   49468 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:40.399318   49468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:47:40.422415   49468 kubeadm.go:884] updating cluster {Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subn
et: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:40.422684   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:40.719627   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:41.018605   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:41.322596   49468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:41.322687   49468 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:41.368278   49468 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:47:41.368310   49468 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:47:41.368379   49468 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:41.368397   49468 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.368411   49468 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.368431   49468 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.368455   49468 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:47:41.368474   49468 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.368493   49468 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.368549   49468 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:47:41.370299   49468 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.370479   49468 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.370657   49468 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:41.370799   49468 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.370907   49468 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.371101   49468 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:47:41.371232   49468 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:47:41.371319   49468 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.659265   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 09:47:41.661115   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.670704   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.675074   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.690818   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.695978   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.703708   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 09:47:41.947377   49468 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:47:41.947428   49468 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:47:41.947482   49468 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:47:41.947503   49468 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.947564   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947436   49468 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.947638   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947565   49468 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:47:41.947673   49468 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.947735   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947501   49468 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.947784   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947390   49468 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:47:41.947848   49468 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.947879   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:42.030556   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:42.030593   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:42.030616   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:42.030558   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:42.030647   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:42.128639   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:42.128702   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:42.133613   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:42.133726   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:42.133867   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:42.224745   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:42.224756   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:42.236400   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:42.236519   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:42.246691   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
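The cache_images lines above inspect each required image in the runtime and mark it as needing a transfer when it is absent or stored under a different image ID than expected, after which the stale tag is removed with crictl and the cached tarball is loaded with podman. A minimal Go sketch of that decision, with hypothetical image IDs standing in for the sha256 digests shown in the log.

package main

import "fmt"

// needsTransfer reports whether an image must be loaded from the local cache:
// it is missing from the runtime, or present under a different image ID than
// the one expected (the "does not exist at hash ..." case in the log).
func needsTransfer(runtimeIDs map[string]string, image, wantID string) bool {
	gotID, ok := runtimeIDs[image]
	return !ok || gotID != wantID
}

func main() {
	// Hypothetical IDs; the real values are the digests reported by
	// `sudo podman image inspect --format {{.Id}}` over SSH.
	runtime := map[string]string{
		"registry.k8s.io/pause:3.10.1": "id-pause",
	}
	want := map[string]string{
		"registry.k8s.io/pause:3.10.1":                  "id-pause",
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0": "id-apiserver",
	}
	for image, id := range want {
		if needsTransfer(runtime, image, id) {
			fmt.Printf("%s needs transfer: crictl rmi stale tag, then podman load from cache\n", image)
		} else {
			fmt.Printf("%s already present at expected ID\n", image)
		}
	}
}
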
	I1124 09:47:42.842156   49230 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:47:42.842256   49230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:47:42.842381   49230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:47:42.842524   49230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:47:42.842641   49230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:47:42.842711   49230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:47:42.844791   49230 out.go:252]   - Generating certificates and keys ...
	I1124 09:47:42.844876   49230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:47:42.844953   49230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:47:42.845069   49230 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:47:42.845174   49230 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:47:42.845276   49230 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:47:42.845344   49230 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:47:42.845414   49230 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:47:42.845573   49230 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-626350 localhost] and IPs [192.168.61.81 127.0.0.1 ::1]
	I1124 09:47:42.845642   49230 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:47:42.845811   49230 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-626350 localhost] and IPs [192.168.61.81 127.0.0.1 ::1]
	I1124 09:47:42.845903   49230 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:47:42.846005   49230 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:47:42.846065   49230 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:47:42.846193   49230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:47:42.846272   49230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:47:42.846365   49230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:47:42.846462   49230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:47:42.846561   49230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:47:42.846648   49230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:47:42.846774   49230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:47:42.846868   49230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:47:42.848841   49230 out.go:252]   - Booting up control plane ...
	I1124 09:47:42.848974   49230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:47:42.849105   49230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:47:42.849241   49230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:47:42.849429   49230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:47:42.849592   49230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:47:42.849801   49230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:47:42.849964   49230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:47:42.850017   49230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:47:42.850196   49230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:47:42.850370   49230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:47:42.850467   49230 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501804784s
	I1124 09:47:42.850617   49230 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:47:42.850735   49230 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.81:8443/livez
	I1124 09:47:42.850875   49230 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:47:42.851008   49230 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:47:42.851119   49230 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.542586482s
	I1124 09:47:42.851231   49230 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.485477469s
	I1124 09:47:42.851357   49230 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503199752s
	I1124 09:47:42.851501   49230 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:47:42.851657   49230 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:47:42.851729   49230 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:47:42.851942   49230 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-626350 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:47:42.852007   49230 kubeadm.go:319] [bootstrap-token] Using token: 10o6xo.r4t1k3a5ac1zo35l
	I1124 09:47:42.855290   49230 out.go:252]   - Configuring RBAC rules ...
	I1124 09:47:42.855388   49230 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:47:42.855463   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:47:42.855601   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:47:42.855713   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:47:42.855844   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:47:42.855965   49230 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:47:42.856117   49230 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:47:42.856187   49230 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:47:42.856235   49230 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:47:42.856241   49230 kubeadm.go:319] 
	I1124 09:47:42.856310   49230 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:47:42.856321   49230 kubeadm.go:319] 
	I1124 09:47:42.856417   49230 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:47:42.856424   49230 kubeadm.go:319] 
	I1124 09:47:42.856443   49230 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:47:42.856532   49230 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:47:42.856616   49230 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:47:42.856630   49230 kubeadm.go:319] 
	I1124 09:47:42.856704   49230 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:47:42.856718   49230 kubeadm.go:319] 
	I1124 09:47:42.856776   49230 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:47:42.856790   49230 kubeadm.go:319] 
	I1124 09:47:42.856857   49230 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:47:42.856957   49230 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:47:42.857062   49230 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:47:42.857071   49230 kubeadm.go:319] 
	I1124 09:47:42.857201   49230 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:47:42.857306   49230 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:47:42.857315   49230 kubeadm.go:319] 
	I1124 09:47:42.857421   49230 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 10o6xo.r4t1k3a5ac1zo35l \
	I1124 09:47:42.857560   49230 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa \
	I1124 09:47:42.857594   49230 kubeadm.go:319] 	--control-plane 
	I1124 09:47:42.857602   49230 kubeadm.go:319] 
	I1124 09:47:42.857667   49230 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:47:42.857674   49230 kubeadm.go:319] 
	I1124 09:47:42.857744   49230 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 10o6xo.r4t1k3a5ac1zo35l \
	I1124 09:47:42.857845   49230 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa 
	I1124 09:47:42.857857   49230 cni.go:84] Creating CNI manager for ""
	I1124 09:47:42.857865   49230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:42.859407   49230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 09:47:42.860712   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:47:42.875098   49230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 09:47:42.901014   49230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:47:42.901107   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:42.901145   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-626350 minikube.k8s.io/updated_at=2025_11_24T09_47_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-626350 minikube.k8s.io/primary=true
	I1124 09:47:42.961791   49230 ops.go:34] apiserver oom_adj: -16
	I1124 09:47:41.032326   45116 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:47:41.052917   45116 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 09:47:41.088139   45116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:47:41.088210   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:41.088292   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-377882 minikube.k8s.io/updated_at=2025_11_24T09_47_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=pause-377882 minikube.k8s.io/primary=true
	I1124 09:47:41.260728   45116 ops.go:34] apiserver oom_adj: -16
	I1124 09:47:41.260737   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:41.760896   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:42.260939   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:42.761682   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:43.261592   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:43.761513   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.260802   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.761638   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.260814   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.761485   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.859385   45116 kubeadm.go:1114] duration metric: took 4.771240887s to wait for elevateKubeSystemPrivileges
	I1124 09:47:45.859419   45116 kubeadm.go:403] duration metric: took 4m35.037363705s to StartCluster
	I1124 09:47:45.859439   45116 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:45.859538   45116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:45.860786   45116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:45.861117   45116 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:45.861342   45116 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:45.861474   45116 config.go:182] Loaded profile config "pause-377882": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:47:45.863498   45116 out.go:179] * Enabled addons: 
	I1124 09:47:45.863505   45116 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:42.318580   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.318696   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.329403   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:47:42.329527   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:47:42.341584   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:47:42.341594   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:42.341699   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:47:42.341807   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:42.352035   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:42.352094   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (exists)
	I1124 09:47:42.352117   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.352146   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:42.352175   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.353408   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (exists)
	I1124 09:47:42.355304   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (exists)
	I1124 09:47:42.355331   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.13.1 (exists)
	I1124 09:47:42.679799   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:45.143630   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.791429216s)
	I1124 09:47:45.143660   49468 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.791498276s)
	I1124 09:47:45.143687   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (exists)
	I1124 09:47:45.143667   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:47:45.143713   49468 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.463882545s)
	I1124 09:47:45.143750   49468 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:47:45.143719   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:47:45.143782   49468 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:45.143824   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:47:45.143834   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:43.080379   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:43.580822   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.081393   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.580960   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.081448   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.581018   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:46.080976   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:46.580763   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:47.081426   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:47.580758   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:47.692366   49230 kubeadm.go:1114] duration metric: took 4.791324343s to wait for elevateKubeSystemPrivileges
	I1124 09:47:47.692413   49230 kubeadm.go:403] duration metric: took 19.201859766s to StartCluster
	I1124 09:47:47.692438   49230 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:47.692533   49230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:47.694146   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:47.694433   49230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:47:47.694432   49230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.81 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:47.694518   49230 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:47.694709   49230 config.go:182] Loaded profile config "embed-certs-626350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:47:47.694749   49230 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-626350"
	I1124 09:47:47.694763   49230 addons.go:70] Setting default-storageclass=true in profile "embed-certs-626350"
	I1124 09:47:47.694777   49230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-626350"
	I1124 09:47:47.694778   49230 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-626350"
	I1124 09:47:47.694812   49230 host.go:66] Checking if "embed-certs-626350" exists ...
	I1124 09:47:47.696279   49230 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:47.697728   49230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:47.697913   49230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:47.699136   49230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:47.699153   49230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:47.699664   49230 addons.go:239] Setting addon default-storageclass=true in "embed-certs-626350"
	I1124 09:47:47.699708   49230 host.go:66] Checking if "embed-certs-626350" exists ...
	I1124 09:47:47.702071   49230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:47.702092   49230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:47.702479   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.703033   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:47.703075   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.703456   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:47.705068   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.705631   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:47.705668   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.705859   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:48.001103   49230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:47:48.095048   49230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:48.215604   49230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:48.544411   49230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:48.863307   49230 start.go:977] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1124 09:47:48.864754   49230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-626350" to be "Ready" ...
	I1124 09:47:48.888999   49230 node_ready.go:49] node "embed-certs-626350" is "Ready"
	I1124 09:47:48.889033   49230 node_ready.go:38] duration metric: took 24.243987ms for node "embed-certs-626350" to be "Ready" ...
	I1124 09:47:48.889049   49230 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:48.889104   49230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:49.348078   49230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132429809s)
	I1124 09:47:49.348231   49230 api_server.go:72] duration metric: took 1.653769835s to wait for apiserver process to appear ...
	I1124 09:47:49.348254   49230 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:49.348279   49230 api_server.go:253] Checking apiserver healthz at https://192.168.61.81:8443/healthz ...
	I1124 09:47:49.386277   49230 api_server.go:279] https://192.168.61.81:8443/healthz returned 200:
	ok
	I1124 09:47:49.388026   49230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-626350" context rescaled to 1 replicas
	I1124 09:47:49.389837   49230 api_server.go:141] control plane version: v1.34.2
	I1124 09:47:49.389866   49230 api_server.go:131] duration metric: took 41.604231ms to wait for apiserver health ...
	I1124 09:47:49.389876   49230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:49.400036   49230 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
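Both clusters above wait for the API server by polling its /healthz endpoint until it returns 200 ("ok"). A minimal Go sketch of such a poll loop; the real api_server.go client authenticates with the cluster CA and client certificates, which this sketch skips.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the API server's /healthz endpoint until it answers
// 200 or the timeout expires, mirroring the healthz wait in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// No CA bundle in this sketch, so verification is skipped; minikube
		// itself talks to the endpoint with the cluster CA and client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.81:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}
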
	I1124 09:47:45.865101   45116 addons.go:530] duration metric: took 3.766128ms for enable addons: enabled=[]
	I1124 09:47:45.865172   45116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:46.084467   45116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:46.115070   45116 node_ready.go:35] waiting up to 6m0s for node "pause-377882" to be "Ready" ...
	I1124 09:47:46.128184   45116 node_ready.go:49] node "pause-377882" is "Ready"
	I1124 09:47:46.128215   45116 node_ready.go:38] duration metric: took 13.106973ms for node "pause-377882" to be "Ready" ...
	I1124 09:47:46.128231   45116 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:46.128285   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:46.156705   45116 api_server.go:72] duration metric: took 295.54891ms to wait for apiserver process to appear ...
	I1124 09:47:46.156743   45116 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:46.156767   45116 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I1124 09:47:46.165521   45116 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I1124 09:47:46.167551   45116 api_server.go:141] control plane version: v1.34.2
	I1124 09:47:46.167657   45116 api_server.go:131] duration metric: took 10.90363ms to wait for apiserver health ...
	I1124 09:47:46.167691   45116 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:46.176866   45116 system_pods.go:59] 4 kube-system pods found
	I1124 09:47:46.176906   45116 system_pods.go:61] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.176916   45116 system_pods.go:61] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.176925   45116 system_pods.go:61] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.176933   45116 system_pods.go:61] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.176942   45116 system_pods.go:74] duration metric: took 9.209601ms to wait for pod list to return data ...
	I1124 09:47:46.176952   45116 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:47:46.182616   45116 default_sa.go:45] found service account: "default"
	I1124 09:47:46.182646   45116 default_sa.go:55] duration metric: took 5.686595ms for default service account to be created ...
	I1124 09:47:46.182661   45116 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:47:46.272960   45116 system_pods.go:86] 4 kube-system pods found
	I1124 09:47:46.272991   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.273009   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.273016   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.273021   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.273062   45116 retry.go:31] will retry after 261.779199ms: missing components: kube-dns, kube-proxy
	I1124 09:47:46.547537   45116 system_pods.go:86] 5 kube-system pods found
	I1124 09:47:46.547572   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.547586   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.547595   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.547606   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:46.547612   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.547634   45116 retry.go:31] will retry after 284.613792ms: missing components: kube-dns, kube-proxy
	I1124 09:47:46.881884   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:46.881922   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:46.881933   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:46.881943   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.881952   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.881958   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.881965   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:46.881971   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.881990   45116 retry.go:31] will retry after 315.292616ms: missing components: kube-dns, kube-proxy
	I1124 09:47:47.208548   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:47.208592   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.208607   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.208620   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:47.208628   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:47.208640   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:47.208649   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:47.208659   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:47.208678   45116 retry.go:31] will retry after 507.727708ms: missing components: kube-dns, kube-proxy
	I1124 09:47:47.733665   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:47.733703   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.733724   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.733733   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:47.733740   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:47.733746   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:47.733756   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:47.733766   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:47.733794   45116 retry.go:31] will retry after 507.400196ms: missing components: kube-dns, kube-proxy
	I1124 09:47:48.246556   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:48.246607   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:48.246624   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:48.246637   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:48.246650   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:48.246658   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:48.246665   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Running
	I1124 09:47:48.246671   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:48.246691   45116 retry.go:31] will retry after 799.242365ms: missing components: kube-dns
	I1124 09:47:49.051374   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:49.051403   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.051411   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.051419   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.051423   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:49.051428   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:49.051432   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Running
	I1124 09:47:49.051436   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:49.051446   45116 system_pods.go:126] duration metric: took 2.868778011s to wait for k8s-apps to be running ...
	I1124 09:47:49.051456   45116 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:47:49.051512   45116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:49.074813   45116 system_svc.go:56] duration metric: took 23.348137ms WaitForService to wait for kubelet
	I1124 09:47:49.074847   45116 kubeadm.go:587] duration metric: took 3.213697828s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:49.074863   45116 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:49.078784   45116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:49.078820   45116 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:49.078834   45116 node_conditions.go:105] duration metric: took 3.966011ms to run NodePressure ...
	I1124 09:47:49.078849   45116 start.go:242] waiting for startup goroutines ...
	I1124 09:47:49.078860   45116 start.go:247] waiting for cluster config update ...
	I1124 09:47:49.078872   45116 start.go:256] writing updated cluster config ...
	I1124 09:47:49.079272   45116 ssh_runner.go:195] Run: rm -f paused
	I1124 09:47:49.084914   45116 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:49.086200   45116 kapi.go:59] client config for pause-377882: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:49.090408   45116 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fzcps" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:47.297860   49468 ssh_runner.go:235] Completed: which crictl: (2.154005231s)
	I1124 09:47:47.297920   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (2.154068188s)
	I1124 09:47:47.297941   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:47.297946   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:47:47.297973   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:47.298019   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:48.873034   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.574992591s)
	I1124 09:47:48.873075   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:47:48.873102   49468 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:47:48.873155   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:47:48.873223   49468 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.575268548s)
	I1124 09:47:48.873287   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:50.770204   49468 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.896891782s)
	I1124 09:47:50.770288   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:50.770310   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.897126269s)
	I1124 09:47:50.770325   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:47:50.770353   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:50.770385   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:51.835648   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.065240655s)
	I1124 09:47:51.835686   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:47:51.835689   49468 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.065377432s)
	I1124 09:47:51.835741   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:47:51.835843   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:47:51.841904   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1124 09:47:51.841930   49468 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:47:51.841970   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:47:49.401277   49230 addons.go:530] duration metric: took 1.706757046s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:47:49.406238   49230 system_pods.go:59] 8 kube-system pods found
	I1124 09:47:49.406288   49230 system_pods.go:61] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.406314   49230 system_pods.go:61] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.406324   49230 system_pods.go:61] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.406334   49230 system_pods.go:61] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:49.406343   49230 system_pods.go:61] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:49.406355   49230 system_pods.go:61] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:49.406363   49230 system_pods.go:61] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:49.406370   49230 system_pods.go:61] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending
	I1124 09:47:49.406378   49230 system_pods.go:74] duration metric: took 16.495786ms to wait for pod list to return data ...
	I1124 09:47:49.406390   49230 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:47:49.417315   49230 default_sa.go:45] found service account: "default"
	I1124 09:47:49.417356   49230 default_sa.go:55] duration metric: took 10.955817ms for default service account to be created ...
	I1124 09:47:49.417370   49230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:47:49.426259   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:49.426297   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.426322   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.426332   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.426339   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:49.426352   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:49.426364   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:49.426375   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:49.426387   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:49.426412   49230 retry.go:31] will retry after 264.613707ms: missing components: kube-dns, kube-proxy
	I1124 09:47:49.715652   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:49.715688   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.715699   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.715708   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.715725   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:49.715743   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:49.715749   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:49.715758   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:49.715770   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:49.715791   49230 retry.go:31] will retry after 350.158621ms: missing components: kube-dns
	I1124 09:47:50.071130   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:50.071186   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.071219   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.071244   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:50.071252   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:50.071263   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:50.071279   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:50.071291   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:50.071299   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:50.071318   49230 retry.go:31] will retry after 375.919932ms: missing components: kube-dns
	I1124 09:47:50.451626   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:50.451663   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.451674   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.451689   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:50.451697   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:50.451707   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:50.451720   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:50.451733   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:50.451742   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:50.451767   49230 retry.go:31] will retry after 606.729657ms: missing components: kube-dns
	I1124 09:47:51.064644   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:51.064682   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:51.064690   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:51.064697   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:51.064701   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:51.064709   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running
	I1124 09:47:51.064713   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:51.064718   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:51.064721   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Running
	I1124 09:47:51.064730   49230 system_pods.go:126] duration metric: took 1.6473536s to wait for k8s-apps to be running ...
	I1124 09:47:51.064749   49230 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:47:51.064794   49230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:51.083878   49230 system_svc.go:56] duration metric: took 19.11715ms WaitForService to wait for kubelet
	I1124 09:47:51.083909   49230 kubeadm.go:587] duration metric: took 3.389449545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:51.083931   49230 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:51.087934   49230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:51.087962   49230 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:51.087977   49230 node_conditions.go:105] duration metric: took 4.040359ms to run NodePressure ...
	I1124 09:47:51.087994   49230 start.go:242] waiting for startup goroutines ...
	I1124 09:47:51.088007   49230 start.go:247] waiting for cluster config update ...
	I1124 09:47:51.088021   49230 start.go:256] writing updated cluster config ...
	I1124 09:47:51.088385   49230 ssh_runner.go:195] Run: rm -f paused
	I1124 09:47:51.094717   49230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:51.099199   49230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g85rx" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:47:51.096384   45116 pod_ready.go:104] pod "coredns-66bc5c9577-fzcps" is not "Ready", error: <nil>
	W1124 09:47:53.099368   45116 pod_ready.go:104] pod "coredns-66bc5c9577-fzcps" is not "Ready", error: <nil>
	I1124 09:47:54.598857   45116 pod_ready.go:94] pod "coredns-66bc5c9577-fzcps" is "Ready"
	I1124 09:47:54.598881   45116 pod_ready.go:86] duration metric: took 5.508446462s for pod "coredns-66bc5c9577-fzcps" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.598893   45116 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t7vnl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.603960   45116 pod_ready.go:94] pod "coredns-66bc5c9577-t7vnl" is "Ready"
	I1124 09:47:54.603987   45116 pod_ready.go:86] duration metric: took 5.086956ms for pod "coredns-66bc5c9577-t7vnl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.609473   45116 pod_ready.go:83] waiting for pod "etcd-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.615781   45116 pod_ready.go:94] pod "etcd-pause-377882" is "Ready"
	I1124 09:47:54.615808   45116 pod_ready.go:86] duration metric: took 6.310805ms for pod "etcd-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.617933   45116 pod_ready.go:83] waiting for pod "kube-apiserver-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.623895   45116 pod_ready.go:94] pod "kube-apiserver-pause-377882" is "Ready"
	I1124 09:47:54.623915   45116 pod_ready.go:86] duration metric: took 5.95122ms for pod "kube-apiserver-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.796235   45116 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.197072   45116 pod_ready.go:94] pod "kube-controller-manager-pause-377882" is "Ready"
	I1124 09:47:55.197111   45116 pod_ready.go:86] duration metric: took 400.844861ms for pod "kube-controller-manager-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.396347   45116 pod_ready.go:83] waiting for pod "kube-proxy-c42hb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.794688   45116 pod_ready.go:94] pod "kube-proxy-c42hb" is "Ready"
	I1124 09:47:55.794716   45116 pod_ready.go:86] duration metric: took 398.334501ms for pod "kube-proxy-c42hb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.995859   45116 pod_ready.go:83] waiting for pod "kube-scheduler-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:56.396431   45116 pod_ready.go:94] pod "kube-scheduler-pause-377882" is "Ready"
	I1124 09:47:56.396467   45116 pod_ready.go:86] duration metric: took 400.583645ms for pod "kube-scheduler-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:56.396485   45116 pod_ready.go:40] duration metric: took 7.311533621s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:56.462498   45116 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:47:56.464714   45116 out.go:179] * Done! kubectl is now configured to use "pause-377882" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.253097325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977677253072479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4f92208-0482-42a7-9011-5655d4eab4bb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.254258229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81c04378-1012-498a-af88-1c9ce0a8101c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.254351879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81c04378-1012-498a-af88-1c9ce0a8101c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.254651358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81c04378-1012-498a-af88-1c9ce0a8101c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.300553458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68c91cc4-9697-4cd3-b548-809452987339 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.300662918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68c91cc4-9697-4cd3-b548-809452987339 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.302992086Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dad6e460-306d-4120-b4c3-7b2f37f7cb13 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.303974102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977677303934628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dad6e460-306d-4120-b4c3-7b2f37f7cb13 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.305535355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f1142bb-b659-4d74-a8c9-b6bb38757b1b name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.305611113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f1142bb-b659-4d74-a8c9-b6bb38757b1b name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.305918429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f1142bb-b659-4d74-a8c9-b6bb38757b1b name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.358548606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd7c7879-694a-4149-8557-4e1b85fd8059 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.358801359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd7c7879-694a-4149-8557-4e1b85fd8059 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.361440739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c16789a7-1e91-4d41-8753-5031f8101741 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.362008504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977677361975758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c16789a7-1e91-4d41-8753-5031f8101741 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.363040571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=647891e1-dac7-4346-93af-3957609a5003 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.363188518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=647891e1-dac7-4346-93af-3957609a5003 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.363507723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=647891e1-dac7-4346-93af-3957609a5003 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.411875344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79d368fc-1937-49f6-8404-58c8397189c8 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.412227587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79d368fc-1937-49f6-8404-58c8397189c8 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.413568718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1e814fe-43ad-4126-986b-0e4bc584f23e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.414087896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977677414063836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1e814fe-43ad-4126-986b-0e4bc584f23e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.415066569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b731f518-047a-4c3c-9327-27967935062d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.415127512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b731f518-047a-4c3c-9327-27967935062d name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:57 pause-377882 crio[3411]: time="2025-11-24 09:47:57.415287349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b731f518-047a-4c3c-9327-27967935062d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	59a294b884984       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago       Running             coredns                   0                   9b5f30a85369b       coredns-66bc5c9577-t7vnl               kube-system
	c17abfeac8a0d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago       Running             coredns                   0                   4a4bd4f481b91       coredns-66bc5c9577-fzcps               kube-system
	686ee6b9ed43a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   10 seconds ago      Running             kube-proxy                0                   610a405264659       kube-proxy-c42hb                       kube-system
	3cdb365df9dd6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   24 seconds ago      Running             etcd                      4                   491cdd1dfdcd5       etcd-pause-377882                      kube-system
	ac6118758ef1f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   24 seconds ago      Running             kube-apiserver            1                   bef1e65b3d3d2       kube-apiserver-pause-377882            kube-system
	e7a2c5a8afdd3       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   24 seconds ago      Running             kube-scheduler            4                   bc2b410e495ea       kube-scheduler-pause-377882            kube-system
	fc72a91df2fc4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   24 seconds ago      Running             kube-controller-manager   8                   eca054a4811a3       kube-controller-manager-pause-377882   kube-system
	
	
	==> coredns [59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               pause-377882
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-377882
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=pause-377882
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_47_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:47:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-377882
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:47:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    pause-377882
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5575a5935f24e97a9af69e7eb2c61b2
	  System UUID:                e5575a59-35f2-4e97-a9af-69e7eb2c61b2
	  Boot ID:                    349b1732-18e8-44df-80c0-d067b057d1c9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fzcps                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     11s
	  kube-system                 coredns-66bc5c9577-t7vnl                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     11s
	  kube-system                 etcd-pause-377882                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         17s
	  kube-system                 kube-apiserver-pause-377882             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-pause-377882    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-c42hb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-pause-377882             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-377882 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-377882 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-377882 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17s                kubelet          Node pause-377882 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s                kubelet          Node pause-377882 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s                kubelet          Node pause-377882 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node pause-377882 event: Registered Node pause-377882 in Controller
	
	
	==> dmesg <==
	[  +0.001585] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002838] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.264832] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.121412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.124814] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.107066] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.193049] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000450] kauditd_printk_skb: 19 callbacks suppressed
	[Nov24 09:41] kauditd_printk_skb: 224 callbacks suppressed
	[ +31.618388] kauditd_printk_skb: 38 callbacks suppressed
	[Nov24 09:43] kauditd_printk_skb: 261 callbacks suppressed
	[  +2.352837] kauditd_printk_skb: 171 callbacks suppressed
	[  +7.638545] kauditd_printk_skb: 47 callbacks suppressed
	[ +13.540927] kauditd_printk_skb: 70 callbacks suppressed
	[Nov24 09:44] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.604735] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 09:45] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 09:47] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.725524] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.134109] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.209350] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.767250] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.416445] kauditd_printk_skb: 140 callbacks suppressed
	
	
	==> etcd [3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40] <==
	{"level":"warn","ts":"2025-11-24T09:47:36.021097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.049597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.052251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.064003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.079843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.097871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.112693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.129342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.158539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.162964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.181010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.189287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.201278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.212143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.233861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.239559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.253505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.270561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.284914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.291631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.396417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:40.095065Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.4583ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16507270112499124295 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/roles/kube-public/kubeadm:bootstrap-signer-clusterinfo\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-public/kubeadm:bootstrap-signer-clusterinfo\" value_size:284 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:47:40.095196Z","caller":"traceutil/trace.go:172","msg":"trace[1848518674] transaction","detail":"{read_only:false; response_revision:259; number_of_response:1; }","duration":"282.572659ms","start":"2025-11-24T09:47:39.812609Z","end":"2025-11-24T09:47:40.095182Z","steps":["trace[1848518674] 'process raft request'  (duration: 64.461978ms)","trace[1848518674] 'compare'  (duration: 217.24139ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:47:40.637477Z","caller":"traceutil/trace.go:172","msg":"trace[167518157] transaction","detail":"{read_only:false; response_revision:266; number_of_response:1; }","duration":"126.849819ms","start":"2025-11-24T09:47:40.510614Z","end":"2025-11-24T09:47:40.637464Z","steps":["trace[167518157] 'process raft request'  (duration: 126.25887ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:47:40.637622Z","caller":"traceutil/trace.go:172","msg":"trace[1803810617] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"143.568056ms","start":"2025-11-24T09:47:40.494037Z","end":"2025-11-24T09:47:40.637605Z","steps":["trace[1803810617] 'process raft request'  (duration: 105.954011ms)","trace[1803810617] 'compare'  (duration: 35.876493ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:47:57 up 7 min,  0 users,  load average: 0.51, 0.42, 0.23
	Linux pause-377882 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a] <==
	I1124 09:47:37.329941       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:47:37.335351       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 09:47:37.336318       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:47:37.341421       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:37.344182       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:47:37.418501       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:37.428012       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:47:37.438869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:47:38.137646       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:47:38.143980       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:47:38.144081       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:47:39.230265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:47:39.306067       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:47:39.472892       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:47:39.486788       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.144]
	I1124 09:47:39.487985       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:47:39.496184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:47:40.341551       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:47:40.672435       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:47:40.711934       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:47:40.733997       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:47:45.989814       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:47:46.236451       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:46.245197       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:46.283852       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1] <==
	I1124 09:47:45.349010       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 09:47:45.354949       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 09:47:45.356796       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 09:47:45.356886       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 09:47:45.356917       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 09:47:45.356934       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 09:47:45.357326       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:47:45.363858       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:47:45.367832       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:47:45.371889       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:47:45.381808       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:47:45.381852       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 09:47:45.381866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 09:47:45.381820       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:47:45.381937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:47:45.382249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:47:45.382287       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 09:47:45.383839       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:47:45.384886       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:47:45.385008       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:47:45.385295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:47:45.385388       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:47:45.387239       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:47:45.389108       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:47:45.390179       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-377882" podCIDRs=["10.244.0.0/24"]
	
	
	==> kube-proxy [686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566] <==
	I1124 09:47:47.441082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:47:47.541264       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:47:47.546898       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.144"]
	E1124 09:47:47.547101       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:47:47.705554       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 09:47:47.705654       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 09:47:47.705688       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:47:47.756293       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:47:47.756887       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:47:47.756903       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:47:47.763210       1 config.go:200] "Starting service config controller"
	I1124 09:47:47.763902       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:47:47.764111       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:47:47.766582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:47:47.764127       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:47:47.766785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:47:47.775665       1 config.go:309] "Starting node config controller"
	I1124 09:47:47.778989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:47:47.779002       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:47:47.868293       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:47:47.868329       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:47:47.868357       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e7a2c5a8afdd30b03adb3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77] <==
	E1124 09:47:37.385829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:47:37.385832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:47:37.386079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:47:37.386044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:47:37.385998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:47:37.386185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:47:37.386385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:47:38.191894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:47:38.197806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:47:38.294301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:47:38.350340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:47:38.451555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:47:38.465646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:47:38.486241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:47:38.554016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:47:38.636036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:47:38.666634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:47:38.673185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:47:38.748395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:47:38.789545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:47:38.791750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:47:38.808572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:47:38.809019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:47:38.811377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1124 09:47:41.048211       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.556177   13719 apiserver.go:52] "Watching apiserver"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.591966   13719 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.635866   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-377882" podStartSLOduration=1.6358439630000001 podStartE2EDuration="1.635843963s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.616678422 +0000 UTC m=+1.167262292" watchObservedRunningTime="2025-11-24 09:47:41.635843963 +0000 UTC m=+1.186427828"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.651598   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-377882" podStartSLOduration=1.6515836 podStartE2EDuration="1.6515836s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.638323086 +0000 UTC m=+1.188906968" watchObservedRunningTime="2025-11-24 09:47:41.6515836 +0000 UTC m=+1.202167471"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.667640   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-377882" podStartSLOduration=1.667622019 podStartE2EDuration="1.667622019s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.651950233 +0000 UTC m=+1.202534106" watchObservedRunningTime="2025-11-24 09:47:41.667622019 +0000 UTC m=+1.218205873"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.667870   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-377882" podStartSLOduration=1.667860601 podStartE2EDuration="1.667860601s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.666119355 +0000 UTC m=+1.216703226" watchObservedRunningTime="2025-11-24 09:47:41.667860601 +0000 UTC m=+1.218444471"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426841   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-lib-modules\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426922   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-kube-proxy\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426960   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-xtables-lock\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426986   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dgv\" (UniqueName: \"kubernetes.io/projected/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-kube-api-access-79dgv\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734266   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/3ff2e529-3c1f-431e-9199-bb2c04dbe874-kube-api-access-7nzt5\") pod \"coredns-66bc5c9577-t7vnl\" (UID: \"3ff2e529-3c1f-431e-9199-bb2c04dbe874\") " pod="kube-system/coredns-66bc5c9577-t7vnl"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734775   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9349d8e4-be24-4e97-bb02-f38fa659efba-config-volume\") pod \"coredns-66bc5c9577-fzcps\" (UID: \"9349d8e4-be24-4e97-bb02-f38fa659efba\") " pod="kube-system/coredns-66bc5c9577-fzcps"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734831   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcr9\" (UniqueName: \"kubernetes.io/projected/9349d8e4-be24-4e97-bb02-f38fa659efba-kube-api-access-ktcr9\") pod \"coredns-66bc5c9577-fzcps\" (UID: \"9349d8e4-be24-4e97-bb02-f38fa659efba\") " pod="kube-system/coredns-66bc5c9577-fzcps"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734869   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ff2e529-3c1f-431e-9199-bb2c04dbe874-config-volume\") pod \"coredns-66bc5c9577-t7vnl\" (UID: \"3ff2e529-3c1f-431e-9199-bb2c04dbe874\") " pod="kube-system/coredns-66bc5c9577-t7vnl"
	Nov 24 09:47:48 pause-377882 kubelet[13719]: I1124 09:47:48.816355   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c42hb" podStartSLOduration=2.81633057 podStartE2EDuration="2.81633057s" podCreationTimestamp="2025-11-24 09:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:47.833599322 +0000 UTC m=+7.384183197" watchObservedRunningTime="2025-11-24 09:47:48.81633057 +0000 UTC m=+8.366914443"
	Nov 24 09:47:48 pause-377882 kubelet[13719]: I1124 09:47:48.849800   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t7vnl" podStartSLOduration=2.84978381 podStartE2EDuration="2.84978381s" podCreationTimestamp="2025-11-24 09:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:48.819221372 +0000 UTC m=+8.369805244" watchObservedRunningTime="2025-11-24 09:47:48.84978381 +0000 UTC m=+8.400367682"
	Nov 24 09:47:49 pause-377882 kubelet[13719]: I1124 09:47:49.805365   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:47:49 pause-377882 kubelet[13719]: I1124 09:47:49.805813   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: E1124 09:47:50.686171   13719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763977670685802619 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: E1124 09:47:50.686198   13719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763977670685802619 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: I1124 09:47:50.822680   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fzcps" podStartSLOduration=4.8226606929999996 podStartE2EDuration="4.822660693s" podCreationTimestamp="2025-11-24 09:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:48.851385297 +0000 UTC m=+8.401969170" watchObservedRunningTime="2025-11-24 09:47:50.822660693 +0000 UTC m=+10.373244563"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: I1124 09:47:50.968151   13719 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: I1124 09:47:50.970224   13719 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:47:51 pause-377882 kubelet[13719]: I1124 09:47:51.706586   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:47:54 pause-377882 kubelet[13719]: I1124 09:47:54.125021   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-377882 -n pause-377882
helpers_test.go:269: (dbg) Run:  kubectl --context pause-377882 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-377882 -n pause-377882
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-377882 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-377882 logs -n 25: (1.186228557s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p NoKubernetes-544416                                                                                                                                                                                                                      │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ start   │ -p NoKubernetes-544416 --driver=kvm2  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ ssh     │ -p NoKubernetes-544416 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │                     │
	│ delete  │ -p NoKubernetes-544416                                                                                                                                                                                                                      │ NoKubernetes-544416    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ start   │ -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:45 UTC │
	│ ssh     │ cert-options-322176 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ ssh     │ -p cert-options-322176 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ delete  │ -p cert-options-322176                                                                                                                                                                                                                      │ cert-options-322176    │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:43 UTC │ 24 Nov 25 09:45 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-960867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:45 UTC │
	│ stop    │ -p old-k8s-version-960867 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-778378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:45 UTC │
	│ stop    │ -p no-preload-778378 --alsologtostderr -v=3                                                                                                                                                                                                 │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:45 UTC │ 24 Nov 25 09:47 UTC │
	│ start   │ -p cert-expiration-986811 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-986811 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-960867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:46 UTC │
	│ start   │ -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:47 UTC │
	│ delete  │ -p cert-expiration-986811                                                                                                                                                                                                                   │ cert-expiration-986811 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │ 24 Nov 25 09:46 UTC │
	│ start   │ -p embed-certs-626350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-626350     │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-778378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ start   │ -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-778378      │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │                     │
	│ image   │ old-k8s-version-960867 image list --format=json                                                                                                                                                                                             │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ pause   │ -p old-k8s-version-960867 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ unpause │ -p old-k8s-version-960867 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ delete  │ -p old-k8s-version-960867                                                                                                                                                                                                                   │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │ 24 Nov 25 09:47 UTC │
	│ delete  │ -p old-k8s-version-960867                                                                                                                                                                                                                   │ old-k8s-version-960867 │ jenkins │ v1.37.0 │ 24 Nov 25 09:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:47:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:47:12.259632   49468 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:47:12.259769   49468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:47:12.259780   49468 out.go:374] Setting ErrFile to fd 2...
	I1124 09:47:12.259786   49468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:47:12.260126   49468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:47:12.260737   49468 out.go:368] Setting JSON to false
	I1124 09:47:12.261966   49468 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5368,"bootTime":1763972264,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:47:12.262054   49468 start.go:143] virtualization: kvm guest
	I1124 09:47:12.264154   49468 out.go:179] * [no-preload-778378] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:47:12.265498   49468 notify.go:221] Checking for updates...
	I1124 09:47:12.265516   49468 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:47:12.266992   49468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:47:12.268427   49468 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:12.269748   49468 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 09:47:12.270974   49468 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:47:12.272264   49468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:47:12.273852   49468 config.go:182] Loaded profile config "no-preload-778378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:12.274329   49468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:47:12.318985   49468 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 09:47:12.320306   49468 start.go:309] selected driver: kvm2
	I1124 09:47:12.320328   49468 start.go:927] validating driver "kvm2" against &{Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:12.320475   49468 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:47:12.321966   49468 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:12.322013   49468 cni.go:84] Creating CNI manager for ""
	I1124 09:47:12.322081   49468 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:12.322130   49468 start.go:353] cluster config:
	{Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:12.322287   49468 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:12.324718   49468 out.go:179] * Starting "no-preload-778378" primary control-plane node in "no-preload-778378" cluster
	I1124 09:47:09.795644   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:09.796396   49230 main.go:143] libmachine: no network interface addresses found for domain embed-certs-626350 (source=lease)
	I1124 09:47:09.796416   49230 main.go:143] libmachine: trying to list again with source=arp
	I1124 09:47:09.796822   49230 main.go:143] libmachine: unable to find current IP address of domain embed-certs-626350 in network mk-embed-certs-626350 (interfaces detected: [])
	I1124 09:47:09.796856   49230 retry.go:31] will retry after 1.912431309s: waiting for domain to come up
	I1124 09:47:11.711443   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:11.712106   49230 main.go:143] libmachine: no network interface addresses found for domain embed-certs-626350 (source=lease)
	I1124 09:47:11.712122   49230 main.go:143] libmachine: trying to list again with source=arp
	I1124 09:47:11.712511   49230 main.go:143] libmachine: unable to find current IP address of domain embed-certs-626350 in network mk-embed-certs-626350 (interfaces detected: [])
	I1124 09:47:11.712547   49230 retry.go:31] will retry after 3.15029127s: waiting for domain to come up
	I1124 09:47:09.691398   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:09.691423   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:09.691429   45116 cri.go:89] found id: ""
	I1124 09:47:09.691437   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:09.691510   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:09.698387   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:09.704968   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:09.705033   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:09.739216   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:09.739244   45116 cri.go:89] found id: ""
	I1124 09:47:09.739255   45116 logs.go:282] 1 containers: [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:09.739327   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:09.743747   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:09.743824   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:09.782911   45116 cri.go:89] found id: ""
	I1124 09:47:09.782938   45116 logs.go:282] 0 containers: []
	W1124 09:47:09.782947   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:09.782956   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:09.782967   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:09.828128   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:09.828183   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:10.161168   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:10.161215   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:10.207459   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:10.207509   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:10.347266   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:10.347322   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:10.370233   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:10.370274   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:10.451901   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:10.451922   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:10.451936   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:10.488059   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:10.488098   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:10.561847   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:10.561882   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:10.645070   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:10.645127   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:10.736900   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:10.736960   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:10.801138   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:10.801193   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:10.859639   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:10.859679   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:10.910532   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:10.910573   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:13.464303   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:13.490945   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:47:13.491026   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:47:13.525218   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:13.525244   45116 cri.go:89] found id: ""
	I1124 09:47:13.525254   45116 logs.go:282] 1 containers: [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a]
	I1124 09:47:13.525316   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.530122   45116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:47:13.530220   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:47:13.566985   45116 cri.go:89] found id: "644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:13.567013   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:13.567019   45116 cri.go:89] found id: ""
	I1124 09:47:13.567028   45116 logs.go:282] 2 containers: [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2]
	I1124 09:47:13.567091   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.575704   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.580061   45116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:47:13.580141   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:47:13.612702   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:13.612730   45116 cri.go:89] found id: ""
	I1124 09:47:13.612749   45116 logs.go:282] 1 containers: [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154]
	I1124 09:47:13.612813   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.617252   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:47:13.617323   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:47:13.654192   45116 cri.go:89] found id: "af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:13.654219   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:13.654226   45116 cri.go:89] found id: ""
	I1124 09:47:13.654235   45116 logs.go:282] 2 containers: [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560]
	I1124 09:47:13.654298   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.660068   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.664712   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:47:13.664789   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:47:13.714327   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:13.714354   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:13.714367   45116 cri.go:89] found id: ""
	I1124 09:47:13.714376   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:13.714436   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.721423   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.727043   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:13.727129   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:13.773234   45116 cri.go:89] found id: "20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:13.773265   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:13.773271   45116 cri.go:89] found id: ""
	I1124 09:47:13.773280   45116 logs.go:282] 2 containers: [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:13.773356   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.779580   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:13.784971   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:13.785042   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:13.819859   45116 cri.go:89] found id: ""
	I1124 09:47:13.819895   45116 logs.go:282] 0 containers: []
	W1124 09:47:13.819909   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:13.819928   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:13.819949   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:13.864915   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:13.864951   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:13.955155   45116 logs.go:123] Gathering logs for kube-controller-manager [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602] ...
	I1124 09:47:13.955199   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:13.999353   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:13.999386   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:14.112379   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:14.112416   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:14.130303   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:14.130332   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:14.203124   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:14.203172   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:14.252841   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:14.252885   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:14.289469   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:14.289528   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:14.326709   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:14.326749   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:14.378123   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:14.378181   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:14.427299   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:14.427330   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:14.480509   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:14.480558   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:14.524631   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:14.524662   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:09.896468   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:10.395925   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:10.895591   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:10.941745   49070 api_server.go:72] duration metric: took 3.046449146s to wait for apiserver process to appear ...
	I1124 09:47:10.941784   49070 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:10.941805   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:13.591244   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:47:13.591278   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:47:13.591296   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:13.673996   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:47:13.674034   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:47:13.942381   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:13.960813   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:47:13.960850   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1124 09:47:14.442327   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:14.449600   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:47:14.449630   49070 api_server.go:103] status: https://192.168.83.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1124 09:47:14.942575   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:14.947180   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 200:
	ok
	I1124 09:47:14.954012   49070 api_server.go:141] control plane version: v1.28.0
	I1124 09:47:14.954039   49070 api_server.go:131] duration metric: took 4.012247353s to wait for apiserver health ...
	I1124 09:47:14.954052   49070 cni.go:84] Creating CNI manager for ""
	I1124 09:47:14.954060   49070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:14.955770   49070 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 09:47:14.956971   49070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:47:14.970168   49070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 09:47:14.994474   49070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:15.002290   49070 system_pods.go:59] 8 kube-system pods found
	I1124 09:47:15.002319   49070 system_pods.go:61] "coredns-5dd5756b68-qjfrd" [4fd2b02c-5aae-488b-ab0c-c607053b2c61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:15.002327   49070 system_pods.go:61] "etcd-old-k8s-version-960867" [cd6416ef-d54b-45e0-b6a4-b42bcc4e02c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:15.002335   49070 system_pods.go:61] "kube-apiserver-old-k8s-version-960867" [156bcf7a-4753-4df7-b930-852c4e0b254d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:47:15.002343   49070 system_pods.go:61] "kube-controller-manager-old-k8s-version-960867" [77928b09-b20f-4328-8ef0-1545a4fe215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:15.002348   49070 system_pods.go:61] "kube-proxy-lmg4n" [d8bf94d7-0452-410a-9471-be83743449f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:15.002354   49070 system_pods.go:61] "kube-scheduler-old-k8s-version-960867" [f0eec69c-765c-4c84-b554-8236cc26249c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:15.002366   49070 system_pods.go:61] "metrics-server-57f55c9bc5-lbrng" [4b2cfd75-974a-4544-b013-3b8daa376685] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:47:15.002376   49070 system_pods.go:61] "storage-provisioner" [71f29f3d-5b04-4cb9-aab8-233ad3e7fdab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:15.002382   49070 system_pods.go:74] duration metric: took 7.889088ms to wait for pod list to return data ...
	I1124 09:47:15.002391   49070 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:15.009889   49070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:15.009914   49070 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:15.009927   49070 node_conditions.go:105] duration metric: took 7.532244ms to run NodePressure ...
	I1124 09:47:15.009971   49070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:15.235683   49070 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 09:47:15.239554   49070 kubeadm.go:744] kubelet initialised
	I1124 09:47:15.239578   49070 kubeadm.go:745] duration metric: took 3.874474ms waiting for restarted kubelet to initialise ...
	I1124 09:47:15.239592   49070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:47:15.257209   49070 ops.go:34] apiserver oom_adj: -16
	I1124 09:47:15.257231   49070 kubeadm.go:602] duration metric: took 8.669905687s to restartPrimaryControlPlane
	I1124 09:47:15.257240   49070 kubeadm.go:403] duration metric: took 8.723012978s to StartCluster
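The oom_adj probe above (ops.go:34) simply shells out to pgrep and reads /proc/<pid>/oom_adj on the node, expecting the -16 value kubeadm sets for the apiserver. A minimal Go sketch of the same check, assuming it runs directly on the control-plane host rather than through ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the newest process named exactly kube-apiserver, as the log's
    	// pgrep-based check does, then read its (legacy) oom_adj value.
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running here:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read oom_adj:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", data) // expect -16
    }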
	I1124 09:47:15.257255   49070 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:15.257317   49070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:15.258046   49070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:15.258267   49070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.182 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:15.258334   49070 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:15.258431   49070 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258448   49070 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-960867"
	W1124 09:47:15.258456   49070 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:47:15.258482   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	I1124 09:47:15.258461   49070 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258509   49070 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-960867"
	I1124 09:47:15.258505   49070 addons.go:70] Setting dashboard=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258524   49070 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-960867"
	I1124 09:47:15.258576   49070 addons.go:239] Setting addon dashboard=true in "old-k8s-version-960867"
	W1124 09:47:15.258587   49070 addons.go:248] addon dashboard should already be in state true
	I1124 09:47:15.258598   49070 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-960867"
	I1124 09:47:15.258613   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	W1124 09:47:15.258616   49070 addons.go:248] addon metrics-server should already be in state true
	I1124 09:47:15.258546   49070 config.go:182] Loaded profile config "old-k8s-version-960867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:47:15.258661   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	I1124 09:47:15.259939   49070 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:15.261371   49070 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:47:15.261386   49070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:15.261438   49070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:15.262388   49070 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-960867"
	W1124 09:47:15.262406   49070 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:47:15.262427   49070 host.go:66] Checking if "old-k8s-version-960867" exists ...
	I1124 09:47:15.262624   49070 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:47:15.262672   49070 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.262684   49070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:15.262631   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:47:15.262730   49070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:47:15.264524   49070 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:15.264539   49070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:15.266016   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266045   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266525   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.266562   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266596   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.266630   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.266780   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.266924   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.267639   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.268068   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.268100   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.268282   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.268396   49070 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:47:12.326011   49468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:12.326190   49468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/config.json ...
	I1124 09:47:12.326329   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:12.326476   49468 start.go:360] acquireMachinesLock for no-preload-778378: {Name:mk7b5988e566cc8ac324d849b09ff116b4f24553 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 09:47:12.620410   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:12.919094   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:13.217519   49468 cache.go:107] acquiring lock: {Name:mk873476b8b51c5ad30a5f207562c122a407baa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217516   49468 cache.go:107] acquiring lock: {Name:mkd012b56d6bb314838e8477fa61cbc9a5cb6182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217569   49468 cache.go:107] acquiring lock: {Name:mk7b9d9c6ed27d19c384d6cbe702bfd1c838c06e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217641   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:13.217648   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:13.217626   49468 cache.go:107] acquiring lock: {Name:mk843be7defe78f14bd5310432fc15bd3fb06fcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217652   49468 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 107.897µs
	I1124 09:47:13.217662   49468 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:13.217659   49468 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 164.768µs
	I1124 09:47:13.217630   49468 cache.go:107] acquiring lock: {Name:mk59e7d3324e6d5caf067ed3caccff0e089892d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217677   49468 cache.go:107] acquiring lock: {Name:mk8faa0d7d5001227c8e0f6859d07215668f8c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217684   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:13.217693   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:13.217677   49468 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217697   49468 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 73.562µs
	I1124 09:47:13.217708   49468 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217705   49468 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 209.868µs
	I1124 09:47:13.217719   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:13.217683   49468 cache.go:107] acquiring lock: {Name:mk25a8e984499d9056c7556923373a6a0424ac0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217727   49468 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 135.845µs
	I1124 09:47:13.217734   49468 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:13.217778   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:13.217691   49468 cache.go:107] acquiring lock: {Name:mkc9a0c6b55838e55cce5ad7bc53cddbd14b524c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:13.217799   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:13.217797   49468 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 121.913µs
	I1124 09:47:13.217814   49468 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 189.002µs
	I1124 09:47:13.217824   49468 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:13.217829   49468 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217723   49468 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:13.217884   49468 cache.go:115] /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:13.217905   49468 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 261.163µs
	I1124 09:47:13.217913   49468 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:13.217932   49468 cache.go:87] Successfully saved all images to host disk.
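The cache.go lines above report only hits because every image tarball already exists under .minikube/cache/images/amd64, so nothing has to be pulled for the no-preload profile. A small sketch of that existence check, with the cache directory and a few image names taken from the log (the full image set depends on the requested Kubernetes version):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	cacheDir := "/home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64"
    	images := []string{
    		"registry.k8s.io/pause_3.10.1",
    		"registry.k8s.io/kube-apiserver_v1.35.0-beta.0",
    		"registry.k8s.io/etcd_3.5.24-0",
    	}
    	for _, img := range images {
    		if _, err := os.Stat(filepath.Join(cacheDir, img)); err == nil {
    			fmt.Println("cached: ", img) // already on disk, no save/pull needed
    		} else {
    			fmt.Println("missing:", img) // would have to be saved to a tar file first
    		}
    	}
    }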
	I1124 09:47:14.864324   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:14.865044   49230 main.go:143] libmachine: no network interface addresses found for domain embed-certs-626350 (source=lease)
	I1124 09:47:14.865056   49230 main.go:143] libmachine: trying to list again with source=arp
	I1124 09:47:14.865478   49230 main.go:143] libmachine: unable to find current IP address of domain embed-certs-626350 in network mk-embed-certs-626350 (interfaces detected: [])
	I1124 09:47:14.865510   49230 retry.go:31] will retry after 3.41691704s: waiting for domain to come up
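retry.go:31 keeps re-listing the libvirt domain's interfaces (lease source first, then ARP) until an IP address appears. A simplified sketch of that polling loop, with a fixed delay instead of minikube's growing backoff and a hypothetical lookupLeaseIP helper standing in for the lease/ARP lookup:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var attempts int

    // lookupLeaseIP is hypothetical: it fails a few times before "finding" an
    // address, the way a freshly created domain has no DHCP lease at first.
    func lookupLeaseIP(domain string) (string, error) {
    	attempts++
    	if attempts < 4 {
    		return "", errors.New("no lease yet")
    	}
    	return "192.168.61.81", nil
    }

    func main() {
    	for {
    		ip, err := lookupLeaseIP("embed-certs-626350")
    		if err == nil {
    			fmt.Println("domain is up at", ip)
    			return
    		}
    		fmt.Println("waiting for domain to come up:", err)
    		time.Sleep(3 * time.Second)
    	}
    }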
	I1124 09:47:15.269560   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:47:15.269572   49070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:47:15.272007   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.272425   49070 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:0e:4a", ip: ""} in network mk-old-k8s-version-960867: {Iface:virbr5 ExpiryTime:2025-11-24 10:46:56 +0000 UTC Type:0 Mac:52:54:00:fa:0e:4a Iaid: IPaddr:192.168.83.182 Prefix:24 Hostname:old-k8s-version-960867 Clientid:01:52:54:00:fa:0e:4a}
	I1124 09:47:15.272467   49070 main.go:143] libmachine: domain old-k8s-version-960867 has defined IP address 192.168.83.182 and MAC address 52:54:00:fa:0e:4a in network mk-old-k8s-version-960867
	I1124 09:47:15.272646   49070 sshutil.go:53] new ssh client: &{IP:192.168.83.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/old-k8s-version-960867/id_rsa Username:docker}
	I1124 09:47:15.475954   49070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:15.503781   49070 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-960867" to be "Ready" ...
	I1124 09:47:15.631576   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:15.634651   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:47:15.634676   49070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:47:15.642857   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:47:15.642877   49070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:47:15.654687   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.676828   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:47:15.676858   49070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:47:15.730265   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:47:15.730299   49070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:47:15.730496   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:47:15.730526   49070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:47:15.800810   49070 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:47:15.800847   49070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:47:15.812108   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:47:15.812149   49070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:47:15.894307   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:47:15.894341   49070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:47:15.918524   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:47:16.026243   49070 node_ready.go:49] node "old-k8s-version-960867" is "Ready"
	I1124 09:47:16.026284   49070 node_ready.go:38] duration metric: took 522.471936ms for node "old-k8s-version-960867" to be "Ready" ...
	I1124 09:47:16.026303   49070 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:16.026361   49070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:16.043565   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:47:16.043594   49070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:47:16.169019   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:47:16.169049   49070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:47:16.244520   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:47:16.244543   49070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:47:16.332639   49070 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:47:16.332691   49070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:47:16.382826   49070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:47:17.218202   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.586590141s)
	I1124 09:47:17.620566   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.96583784s)
	I1124 09:47:17.861066   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.942492809s)
	I1124 09:47:17.861119   49070 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-960867"
	I1124 09:47:17.861074   49070 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.834691962s)
	I1124 09:47:17.861150   49070 api_server.go:72] duration metric: took 2.602858171s to wait for apiserver process to appear ...
	I1124 09:47:17.861180   49070 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:17.861216   49070 api_server.go:253] Checking apiserver healthz at https://192.168.83.182:8443/healthz ...
	I1124 09:47:17.875545   49070 api_server.go:279] https://192.168.83.182:8443/healthz returned 200:
	ok
	I1124 09:47:17.878077   49070 api_server.go:141] control plane version: v1.28.0
	I1124 09:47:17.878113   49070 api_server.go:131] duration metric: took 16.912947ms to wait for apiserver health ...
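api_server.go polls the apiserver's /healthz endpoint until it returns 200 with body "ok", as the two lines above show. A minimal Go sketch of that probe, using the address from the log and skipping certificate verification only because this is a throwaway health check against the apiserver's own serving cert:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.83.182:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
    }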
	I1124 09:47:17.878127   49070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:17.888070   49070 system_pods.go:59] 8 kube-system pods found
	I1124 09:47:17.888101   49070 system_pods.go:61] "coredns-5dd5756b68-qjfrd" [4fd2b02c-5aae-488b-ab0c-c607053b2c61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:17.888109   49070 system_pods.go:61] "etcd-old-k8s-version-960867" [cd6416ef-d54b-45e0-b6a4-b42bcc4e02c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:17.888116   49070 system_pods.go:61] "kube-apiserver-old-k8s-version-960867" [156bcf7a-4753-4df7-b930-852c4e0b254d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:47:17.888121   49070 system_pods.go:61] "kube-controller-manager-old-k8s-version-960867" [77928b09-b20f-4328-8ef0-1545a4fe215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:17.888126   49070 system_pods.go:61] "kube-proxy-lmg4n" [d8bf94d7-0452-410a-9471-be83743449f4] Running
	I1124 09:47:17.888140   49070 system_pods.go:61] "kube-scheduler-old-k8s-version-960867" [f0eec69c-765c-4c84-b554-8236cc26249c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:17.888147   49070 system_pods.go:61] "metrics-server-57f55c9bc5-lbrng" [4b2cfd75-974a-4544-b013-3b8daa376685] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:47:17.888154   49070 system_pods.go:61] "storage-provisioner" [71f29f3d-5b04-4cb9-aab8-233ad3e7fdab] Running
	I1124 09:47:17.888186   49070 system_pods.go:74] duration metric: took 10.051591ms to wait for pod list to return data ...
	I1124 09:47:17.888198   49070 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:47:17.894210   49070 default_sa.go:45] found service account: "default"
	I1124 09:47:17.894235   49070 default_sa.go:55] duration metric: took 6.032157ms for default service account to be created ...
	I1124 09:47:17.894245   49070 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:47:17.901024   49070 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:17.901062   49070 system_pods.go:89] "coredns-5dd5756b68-qjfrd" [4fd2b02c-5aae-488b-ab0c-c607053b2c61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:17.901073   49070 system_pods.go:89] "etcd-old-k8s-version-960867" [cd6416ef-d54b-45e0-b6a4-b42bcc4e02c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:17.901088   49070 system_pods.go:89] "kube-apiserver-old-k8s-version-960867" [156bcf7a-4753-4df7-b930-852c4e0b254d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:47:17.901103   49070 system_pods.go:89] "kube-controller-manager-old-k8s-version-960867" [77928b09-b20f-4328-8ef0-1545a4fe215d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:17.901110   49070 system_pods.go:89] "kube-proxy-lmg4n" [d8bf94d7-0452-410a-9471-be83743449f4] Running
	I1124 09:47:17.901119   49070 system_pods.go:89] "kube-scheduler-old-k8s-version-960867" [f0eec69c-765c-4c84-b554-8236cc26249c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:17.901126   49070 system_pods.go:89] "metrics-server-57f55c9bc5-lbrng" [4b2cfd75-974a-4544-b013-3b8daa376685] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:47:17.901133   49070 system_pods.go:89] "storage-provisioner" [71f29f3d-5b04-4cb9-aab8-233ad3e7fdab] Running
	I1124 09:47:17.901148   49070 system_pods.go:126] duration metric: took 6.896666ms to wait for k8s-apps to be running ...
	I1124 09:47:17.901178   49070 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:47:17.901241   49070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:18.410648   49070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.027767366s)
	I1124 09:47:18.410696   49070 system_svc.go:56] duration metric: took 509.531199ms WaitForService to wait for kubelet
	I1124 09:47:18.410720   49070 kubeadm.go:587] duration metric: took 3.152426043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:18.410798   49070 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:18.412368   49070 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-960867 addons enable metrics-server
	
	I1124 09:47:18.413790   49070 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1124 09:47:14.841629   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:14.841663   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:14.909793   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:17.410153   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:17.430096   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:47:17.430190   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:47:17.462256   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:17.462278   45116 cri.go:89] found id: ""
	I1124 09:47:17.462289   45116 logs.go:282] 1 containers: [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a]
	I1124 09:47:17.462353   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.467039   45116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:47:17.467120   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:47:17.505441   45116 cri.go:89] found id: "644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:17.505470   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:17.505476   45116 cri.go:89] found id: ""
	I1124 09:47:17.505487   45116 logs.go:282] 2 containers: [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2]
	I1124 09:47:17.505550   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.510071   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.514447   45116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:47:17.514515   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:47:17.548210   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:17.548235   45116 cri.go:89] found id: ""
	I1124 09:47:17.548246   45116 logs.go:282] 1 containers: [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154]
	I1124 09:47:17.548310   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.553880   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:47:17.553961   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:47:17.586651   45116 cri.go:89] found id: "af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:17.586683   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:17.586690   45116 cri.go:89] found id: ""
	I1124 09:47:17.586700   45116 logs.go:282] 2 containers: [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560]
	I1124 09:47:17.586774   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.591298   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.597170   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:47:17.597251   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:47:17.633809   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:17.633841   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:17.633849   45116 cri.go:89] found id: ""
	I1124 09:47:17.633862   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:17.633918   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.640075   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.646199   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:17.646281   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:17.685154   45116 cri.go:89] found id: "20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:17.685201   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:17.685208   45116 cri.go:89] found id: ""
	I1124 09:47:17.685219   45116 logs.go:282] 2 containers: [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:17.685284   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.691409   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:17.695847   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:17.695914   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:17.733789   45116 cri.go:89] found id: ""
	I1124 09:47:17.733817   45116 logs.go:282] 0 containers: []
	W1124 09:47:17.733829   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:17.733848   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:17.733862   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:17.825744   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:17.825781   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:17.861631   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:17.861659   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:17.905178   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:17.905223   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:17.926893   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:17.926926   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:17.982775   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:17.982812   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:18.021939   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:18.021972   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:18.089811   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:18.089843   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:18.140457   45116 logs.go:123] Gathering logs for kube-controller-manager [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602] ...
	I1124 09:47:18.140493   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:18.178392   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:18.178419   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:18.513554   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:18.513596   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:18.590097   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:18.590118   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:18.590134   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:18.645549   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:18.645585   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:18.689214   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:18.689250   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:18.724771   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:18.724798   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:18.414937   49070 addons.go:530] duration metric: took 3.156602517s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1124 09:47:18.416328   49070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:18.416345   49070 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:18.416357   49070 node_conditions.go:105] duration metric: took 5.553892ms to run NodePressure ...
	I1124 09:47:18.416369   49070 start.go:242] waiting for startup goroutines ...
	I1124 09:47:18.416378   49070 start.go:247] waiting for cluster config update ...
	I1124 09:47:18.416395   49070 start.go:256] writing updated cluster config ...
	I1124 09:47:18.416656   49070 ssh_runner.go:195] Run: rm -f paused
	I1124 09:47:18.425881   49070 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:18.436418   49070 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-qjfrd" in "kube-system" namespace to be "Ready" or be gone ...
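pod_ready.go then waits for each labelled kube-system pod to report a Ready condition. A sketch of that wait done directly with client-go, with the kubeconfig path, namespace, and pod name taken from the log; minikube's own implementation differs in its retry behaviour and "or be gone" handling:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21978-5665/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll until the pod reports Ready=True or the 4m timeout expires.
    	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-qjfrd", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // not visible yet; keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	fmt.Println("coredns ready:", err == nil)
    }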
	I1124 09:47:19.789544   49468 start.go:364] duration metric: took 7.462995654s to acquireMachinesLock for "no-preload-778378"
	I1124 09:47:19.789623   49468 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:47:19.789632   49468 fix.go:54] fixHost starting: 
	I1124 09:47:19.791934   49468 fix.go:112] recreateIfNeeded on no-preload-778378: state=Stopped err=<nil>
	W1124 09:47:19.791963   49468 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:47:18.283800   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.284517   49230 main.go:143] libmachine: domain embed-certs-626350 has current primary IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.284533   49230 main.go:143] libmachine: found domain IP: 192.168.61.81
	I1124 09:47:18.284540   49230 main.go:143] libmachine: reserving static IP address...
	I1124 09:47:18.285114   49230 main.go:143] libmachine: unable to find host DHCP lease matching {name: "embed-certs-626350", mac: "52:54:00:21:fc:08", ip: "192.168.61.81"} in network mk-embed-certs-626350
	I1124 09:47:18.509180   49230 main.go:143] libmachine: reserved static IP address 192.168.61.81 for domain embed-certs-626350
	I1124 09:47:18.509208   49230 main.go:143] libmachine: waiting for SSH...
	I1124 09:47:18.509217   49230 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 09:47:18.513019   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.513547   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.513572   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.513861   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.514191   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.514207   49230 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 09:47:18.631820   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
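The "exit 0" command above is how libmachine confirms sshd is accepting sessions before provisioning starts. As a simplified stand-in, the sketch below only probes TCP port 22 on the address from the log rather than opening a real SSH session:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.61.81:22"
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("sshd is accepting connections on", addr)
    			return
    		}
    		time.Sleep(2 * time.Second) // keep waiting until the guest's sshd is up
    	}
    }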
	I1124 09:47:18.632210   49230 main.go:143] libmachine: domain creation complete
	I1124 09:47:18.633710   49230 machine.go:94] provisionDockerMachine start ...
	I1124 09:47:18.636441   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.636859   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.636892   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.637061   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.637367   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.637381   49230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:47:18.752906   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 09:47:18.752948   49230 buildroot.go:166] provisioning hostname "embed-certs-626350"
	I1124 09:47:18.756769   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.757281   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.757318   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.757527   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.757742   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.757755   49230 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-626350 && echo "embed-certs-626350" | sudo tee /etc/hostname
	I1124 09:47:18.890389   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-626350
	
	I1124 09:47:18.893735   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.894243   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:18.894268   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:18.894517   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:18.894751   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:18.894769   49230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-626350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-626350/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-626350' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:19.014701   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:19.014749   49230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 09:47:19.014806   49230 buildroot.go:174] setting up certificates
	I1124 09:47:19.014825   49230 provision.go:84] configureAuth start
	I1124 09:47:19.017620   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.018003   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.018024   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.020335   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.020808   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.020833   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.020973   49230 provision.go:143] copyHostCerts
	I1124 09:47:19.021017   49230 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem, removing ...
	I1124 09:47:19.021033   49230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem
	I1124 09:47:19.021102   49230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 09:47:19.021236   49230 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem, removing ...
	I1124 09:47:19.021245   49230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem
	I1124 09:47:19.021275   49230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 09:47:19.021345   49230 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem, removing ...
	I1124 09:47:19.021353   49230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem
	I1124 09:47:19.021383   49230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 09:47:19.021443   49230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.embed-certs-626350 san=[127.0.0.1 192.168.61.81 embed-certs-626350 localhost minikube]
	I1124 09:47:19.078151   49230 provision.go:177] copyRemoteCerts
	I1124 09:47:19.078217   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:19.080905   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.081347   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.081374   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.081529   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.169233   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:19.198798   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 09:47:19.228847   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:19.258355   49230 provision.go:87] duration metric: took 243.499136ms to configureAuth
	I1124 09:47:19.258384   49230 buildroot.go:189] setting minikube options for container-runtime
	I1124 09:47:19.258599   49230 config.go:182] Loaded profile config "embed-certs-626350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:47:19.261825   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.262357   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.262384   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.262639   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:19.262839   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:19.262853   49230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:19.526787   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:19.526820   49230 machine.go:97] duration metric: took 893.092379ms to provisionDockerMachine
	I1124 09:47:19.526834   49230 client.go:176] duration metric: took 19.6512214s to LocalClient.Create
	I1124 09:47:19.526862   49230 start.go:167] duration metric: took 19.651290285s to libmachine.API.Create "embed-certs-626350"
	I1124 09:47:19.526877   49230 start.go:293] postStartSetup for "embed-certs-626350" (driver="kvm2")
	I1124 09:47:19.526897   49230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:19.526982   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:19.530259   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.530687   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.530726   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.530931   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.618509   49230 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:19.623597   49230 info.go:137] Remote host: Buildroot 2025.02
	I1124 09:47:19.623622   49230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 09:47:19.623682   49230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 09:47:19.623786   49230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem -> 96292.pem in /etc/ssl/certs
	I1124 09:47:19.623900   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:47:19.636104   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:19.667047   49230 start.go:296] duration metric: took 140.150925ms for postStartSetup
	I1124 09:47:19.670434   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.670919   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.670948   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.671236   49230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/config.json ...
	I1124 09:47:19.671448   49230 start.go:128] duration metric: took 19.798158784s to createHost
	I1124 09:47:19.673938   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.674398   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.674476   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.674727   49230 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:19.674926   49230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.81 22 <nil> <nil>}
	I1124 09:47:19.674936   49230 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 09:47:19.789344   49230 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763977639.751016482
	
	I1124 09:47:19.789371   49230 fix.go:216] guest clock: 1763977639.751016482
	I1124 09:47:19.789380   49230 fix.go:229] Guest: 2025-11-24 09:47:19.751016482 +0000 UTC Remote: 2025-11-24 09:47:19.671461198 +0000 UTC m=+26.691953308 (delta=79.555284ms)
	I1124 09:47:19.789398   49230 fix.go:200] guest clock delta is within tolerance: 79.555284ms
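fix.go above reads the guest clock with `date +%s.%N` and compares it against the host clock, logging the delta. A small sketch of that comparison, reusing the value from the log; the one-second tolerance here is an assumption for illustration, not minikube's actual threshold.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1763977639.751016482")
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so "75" means 750ms, not 75ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1763977639.751016482") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	const tolerance = time.Second // assumed threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
```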
	I1124 09:47:19.789403   49230 start.go:83] releasing machines lock for "embed-certs-626350", held for 19.916365823s
	I1124 09:47:19.792898   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.793295   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.793326   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.793884   49230 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:19.794001   49230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:19.797857   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.797936   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.798380   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.798411   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.798503   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:19.798549   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:19.798660   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.798881   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:19.885367   49230 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:19.924042   49230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:20.089152   49230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:47:20.098284   49230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:20.098350   49230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:20.119364   49230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:47:20.119392   49230 start.go:496] detecting cgroup driver to use...
	I1124 09:47:20.119457   49230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:20.139141   49230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:20.158478   49230 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:20.158557   49230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:20.177306   49230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:20.194720   49230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:20.355648   49230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:20.582731   49230 docker.go:234] disabling docker service ...
	I1124 09:47:20.582794   49230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:20.601075   49230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:20.621424   49230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:20.787002   49230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:20.944675   49230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:20.962188   49230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:20.985630   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:21.276908   49230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:21.277001   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.290782   49230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:21.290847   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.307918   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.326726   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.340332   49230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:21.354479   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.368743   49230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:21.393854   49230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
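The sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, put conmon in the "pod" cgroup, and open unprivileged ports via default_sysctls. A rough Go equivalent of the first few substitutions is sketched below; the sample config fragment is invented for illustration (the real /etc/crio/crio.conf.d/02-crio.conf on the VM is not shown in the log).

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented fragment in the shape of /etc/crio/crio.conf.d/02-crio.conf.
	conf := []byte(`[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`)

	// Pin the pause image (mirrors the first sed in the log).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Drop any existing conmon_cgroup line ...
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAll(conf, nil)
	// ... then switch the cgroup manager and re-add conmon_cgroup = "pod" after it.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))

	fmt.Printf("%s", conf)
}
```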
	I1124 09:47:21.407231   49230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:21.420253   49230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 09:47:21.420343   49230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 09:47:21.443332   49230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
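Since the bridge netfilter sysctl is missing, the log loads br_netfilter and then enables IPv4 forwarding, without which pod traffic cannot be routed off the node. The same check-and-set against /proc, done directly from Go (run as root), might look like this sketch.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Direct equivalent of `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`.
	const path = "/proc/sys/net/ipv4/ip_forward"
	cur, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(string(cur)) != "1" {
		if err := os.WriteFile(path, []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}
	fmt.Println("net.ipv4.ip_forward enabled")
}
```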
	I1124 09:47:21.459149   49230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:21.664261   49230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:21.821845   49230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:21.821919   49230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:21.829623   49230 start.go:564] Will wait 60s for crictl version
	I1124 09:47:21.829693   49230 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.835015   49230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 09:47:21.878357   49230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 09:47:21.878454   49230 ssh_runner.go:195] Run: crio --version
	I1124 09:47:21.919435   49230 ssh_runner.go:195] Run: crio --version
	I1124 09:47:21.968298   49230 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1124 09:47:19.793645   49468 out.go:252] * Restarting existing kvm2 VM for "no-preload-778378" ...
	I1124 09:47:19.793695   49468 main.go:143] libmachine: starting domain...
	I1124 09:47:19.793709   49468 main.go:143] libmachine: ensuring networks are active...
	I1124 09:47:19.794921   49468 main.go:143] libmachine: Ensuring network default is active
	I1124 09:47:19.795508   49468 main.go:143] libmachine: Ensuring network mk-no-preload-778378 is active
	I1124 09:47:19.796583   49468 main.go:143] libmachine: getting domain XML...
	I1124 09:47:19.797865   49468 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-778378</name>
	  <uuid>2076f1b8-6857-452b-a9dc-78378add2d65</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/no-preload-778378.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8b:fd:5d'/>
	      <source network='mk-no-preload-778378'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e5:48:00'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
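The libmachine step above hands the domain XML shown here back to libvirt and boots the VM ("domain is now running"). A rough stand-in for that step, driving libvirt through the virsh CLI rather than minikube's own bindings, is sketched below; the XML file path is an assumption (the definition above saved to disk).

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/no-preload-778378.xml" // assumed: the domain XML above, written to disk
	domain := "no-preload-778378"

	// Register (or re-register) the domain definition with libvirt.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		log.Fatalf("virsh define failed: %v\n%s", err, out)
	}
	// Boot it; this corresponds to "domain is now running" in the log.
	if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
		log.Fatalf("virsh start failed: %v\n%s", err, out)
	}
	log.Printf("domain %s started", domain)
}
```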
	
	I1124 09:47:21.147488   49468 main.go:143] libmachine: waiting for domain to start...
	I1124 09:47:21.149177   49468 main.go:143] libmachine: domain is now running
	I1124 09:47:21.149198   49468 main.go:143] libmachine: waiting for IP...
	I1124 09:47:21.150043   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.150649   49468 main.go:143] libmachine: domain no-preload-778378 has current primary IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.150669   49468 main.go:143] libmachine: found domain IP: 192.168.72.119
	I1124 09:47:21.150677   49468 main.go:143] libmachine: reserving static IP address...
	I1124 09:47:21.151148   49468 main.go:143] libmachine: found host DHCP lease matching {name: "no-preload-778378", mac: "52:54:00:8b:fd:5d", ip: "192.168.72.119"} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:44:11 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:21.151213   49468 main.go:143] libmachine: skip adding static IP to network mk-no-preload-778378 - found existing host DHCP lease matching {name: "no-preload-778378", mac: "52:54:00:8b:fd:5d", ip: "192.168.72.119"}
	I1124 09:47:21.151234   49468 main.go:143] libmachine: reserved static IP address 192.168.72.119 for domain no-preload-778378
	I1124 09:47:21.151247   49468 main.go:143] libmachine: waiting for SSH...
	I1124 09:47:21.151257   49468 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 09:47:21.154085   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.154524   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:44:11 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:21.154548   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:21.154714   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:21.154925   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:21.154937   49468 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 09:47:21.972912   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:21.973385   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:21.973410   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:21.973774   49230 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:21.979349   49230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:47:22.001043   49230 kubeadm.go:884] updating cluster {Name:embed-certs-626350 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-626350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.81 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:22.001327   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:22.311077   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:22.600356   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:22.924604   49230 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:47:22.924763   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:21.259835   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:21.279406   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:47:21.279479   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:47:21.317875   45116 cri.go:89] found id: "fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:21.317904   45116 cri.go:89] found id: ""
	I1124 09:47:21.317915   45116 logs.go:282] 1 containers: [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a]
	I1124 09:47:21.317983   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.324465   45116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:47:21.324549   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:47:21.366544   45116 cri.go:89] found id: "644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	I1124 09:47:21.366572   45116 cri.go:89] found id: "0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:21.366580   45116 cri.go:89] found id: ""
	I1124 09:47:21.366591   45116 logs.go:282] 2 containers: [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2]
	I1124 09:47:21.366659   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.372428   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.376885   45116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:47:21.376951   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:47:21.415145   45116 cri.go:89] found id: "f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:21.415186   45116 cri.go:89] found id: ""
	I1124 09:47:21.415196   45116 logs.go:282] 1 containers: [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154]
	I1124 09:47:21.415260   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.421740   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:47:21.421807   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:47:21.468682   45116 cri.go:89] found id: "af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:21.468706   45116 cri.go:89] found id: "d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:21.468711   45116 cri.go:89] found id: ""
	I1124 09:47:21.468721   45116 logs.go:282] 2 containers: [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560]
	I1124 09:47:21.468788   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.473724   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.478444   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:47:21.478528   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:47:21.524915   45116 cri.go:89] found id: "c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:21.524948   45116 cri.go:89] found id: "83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:21.524955   45116 cri.go:89] found id: ""
	I1124 09:47:21.524965   45116 logs.go:282] 2 containers: [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1]
	I1124 09:47:21.525034   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.533205   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.539850   45116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:47:21.539954   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:47:21.582298   45116 cri.go:89] found id: "20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:21.582323   45116 cri.go:89] found id: "a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	I1124 09:47:21.582328   45116 cri.go:89] found id: ""
	I1124 09:47:21.582337   45116 logs.go:282] 2 containers: [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]
	I1124 09:47:21.582402   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.587194   45116 ssh_runner.go:195] Run: which crictl
	I1124 09:47:21.591951   45116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:47:21.592024   45116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:47:21.629236   45116 cri.go:89] found id: ""
	I1124 09:47:21.629265   45116 logs.go:282] 0 containers: []
	W1124 09:47:21.629287   45116 logs.go:284] No container was found matching "kindnet"
	I1124 09:47:21.629310   45116 logs.go:123] Gathering logs for coredns [f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154] ...
	I1124 09:47:21.629336   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c4196a1b46b7f4a73184bf02ebc10ea4e2d50950a5daa39cabd81b1443154"
	I1124 09:47:21.678519   45116 logs.go:123] Gathering logs for kube-proxy [c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02] ...
	I1124 09:47:21.678555   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56922ebf064a5044f3a0f188b3332ca50246501d9240f39979e015814963d02"
	I1124 09:47:21.736902   45116 logs.go:123] Gathering logs for kube-proxy [83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1] ...
	I1124 09:47:21.736949   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b3d7d28ee4f7f700a0cb6e62a36ee29a8072842cc7a46a44992cf8c01ea6e1"
	I1124 09:47:21.793813   45116 logs.go:123] Gathering logs for kubelet ...
	I1124 09:47:21.793850   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:47:21.915449   45116 logs.go:123] Gathering logs for kube-scheduler [af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671] ...
	I1124 09:47:21.915496   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af6e7e04c7e99c4a04893a4307a78c8ae57044c9f6db9b5f28f616f13d11b671"
	I1124 09:47:22.004739   45116 logs.go:123] Gathering logs for dmesg ...
	I1124 09:47:22.004771   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:47:22.024129   45116 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:47:22.024169   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:47:22.111618   45116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:47:22.111645   45116 logs.go:123] Gathering logs for etcd [0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2] ...
	I1124 09:47:22.111677   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b2d23c17a02827136d0f8b12520cfb12e967c98c414928906e0168ea4ba04d2"
	I1124 09:47:22.156847   45116 logs.go:123] Gathering logs for kube-scheduler [d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560] ...
	I1124 09:47:22.156879   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1df0b7cceff1dc90d3a20c75c0cd818e9eb21d2c7bacf3a175a67b4ddb29560"
	I1124 09:47:22.203259   45116 logs.go:123] Gathering logs for kube-controller-manager [20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602] ...
	I1124 09:47:22.203295   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a2d73a1b18200c2397889f4d436684130e01f56d14d1c93f3706ce97dc1602"
	I1124 09:47:22.238845   45116 logs.go:123] Gathering logs for kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5] ...
	I1124 09:47:22.238876   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	W1124 09:47:22.274501   45116 logs.go:130] failed kube-controller-manager [a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:47:22.267509   12970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist" containerID="a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	time="2025-11-24T09:47:22Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 09:47:22.267509   12970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist" containerID="a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5"
	time="2025-11-24T09:47:22Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5\": container with ID starting with a93c6bfb6601e5a42703ca459b12c430a8c96414e1cb76931d1175b607b195e5 not found: ID does not exist"
	
	** /stderr **
	I1124 09:47:22.274534   45116 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:47:22.274550   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:47:22.636354   45116 logs.go:123] Gathering logs for container status ...
	I1124 09:47:22.636389   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:47:22.687742   45116 logs.go:123] Gathering logs for kube-apiserver [fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a] ...
	I1124 09:47:22.687782   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc8449b5997606e75fb308840f0d005d88986ddef39d284f86ee0c3c85901e8a"
	I1124 09:47:22.743650   45116 logs.go:123] Gathering logs for etcd [644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3] ...
	I1124 09:47:22.743686   45116 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644daed8cda7adcc56fe473cff775cacb780d967fbcfd9408f9d4c7711942fb3"
	W1124 09:47:20.444964   49070 pod_ready.go:104] pod "coredns-5dd5756b68-qjfrd" is not "Ready", error: <nil>
	I1124 09:47:21.946679   49070 pod_ready.go:94] pod "coredns-5dd5756b68-qjfrd" is "Ready"
	I1124 09:47:21.946718   49070 pod_ready.go:86] duration metric: took 3.510269923s for pod "coredns-5dd5756b68-qjfrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:21.956408   49070 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:47:23.963341   49070 pod_ready.go:104] pod "etcd-old-k8s-version-960867" is not "Ready", error: <nil>
	I1124 09:47:24.216442   49468 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I1124 09:47:23.219039   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:23.504209   49230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:23.793505   49230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:23.832951   49230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1124 09:47:23.833038   49230 ssh_runner.go:195] Run: which lz4
	I1124 09:47:23.839070   49230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 09:47:23.845266   49230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 09:47:23.845316   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1124 09:47:25.174773   49230 crio.go:462] duration metric: took 1.335742598s to copy over tarball
	I1124 09:47:25.174848   49230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 09:47:26.894011   49230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.71912962s)
	I1124 09:47:26.894046   49230 crio.go:469] duration metric: took 1.719246192s to extract the tarball
	I1124 09:47:26.894056   49230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 09:47:26.938244   49230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:26.981460   49230 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:26.981494   49230 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:47:26.981502   49230 kubeadm.go:935] updating node { 192.168.61.81 8443 v1.34.2 crio true true} ...
	I1124 09:47:26.981587   49230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-626350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-626350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
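kubeadm.go:947 above logs the rendered kubelet systemd drop-in together with the profile's KubernetesConfig. One way such a drop-in could be rendered from those config fields is sketched with text/template; the template text and field names here are assumptions for illustration, not minikube's actual template.

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative template in the shape of the drop-in logged above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	err := tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.2", "embed-certs-626350", "192.168.61.81"})
	if err != nil {
		panic(err)
	}
}
```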
	I1124 09:47:26.981647   49230 ssh_runner.go:195] Run: crio config
	I1124 09:47:27.038918   49230 cni.go:84] Creating CNI manager for ""
	I1124 09:47:27.038943   49230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:27.038960   49230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:27.038994   49230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.81 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-626350 NodeName:embed-certs-626350 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:27.039138   49230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-626350"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:47:27.039231   49230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:47:27.052072   49230 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:27.052151   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:27.064788   49230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1124 09:47:27.088365   49230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:47:27.112857   49230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
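The generated kubeadm config (2218 bytes) is copied to /var/tmp/minikube/kubeadm.yaml.new on the VM. It is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as printed above), and a quick way to sanity-check such a file is to decode each document in turn, as in this sketch using gopkg.in/yaml.v3 (run wherever the file exists).

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path from the log; adjust when running elsewhere.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode each "---"-separated document and print its kind.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```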
	I1124 09:47:27.133030   49230 ssh_runner.go:195] Run: grep 192.168.61.81	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:27.137548   49230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:47:27.155572   49230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:27.310364   49230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:27.346932   49230 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350 for IP: 192.168.61.81
	I1124 09:47:27.346967   49230 certs.go:195] generating shared ca certs ...
	I1124 09:47:27.346987   49230 certs.go:227] acquiring lock for ca certs: {Name:mkc847d4fb6fb61872e24a1bb00356ff9ef1a409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.347183   49230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key
	I1124 09:47:27.347228   49230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key
	I1124 09:47:27.347236   49230 certs.go:257] generating profile certs ...
	I1124 09:47:27.347295   49230 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.key
	I1124 09:47:27.347307   49230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.crt with IP's: []
	I1124 09:47:27.636103   49230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.crt ...
	I1124 09:47:27.636131   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.crt: {Name:mk340735111201655131ce5d89db6955bfd8290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.643441   49230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.key ...
	I1124 09:47:27.643491   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/client.key: {Name:mkf258701eb8ed1624ff1812815a2c975bcca668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.643645   49230 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c
	I1124 09:47:27.643666   49230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.81]
	I1124 09:47:27.744265   49230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c ...
	I1124 09:47:27.744293   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c: {Name:mk8ea75d55b15efe354dba9875eb752d0a05347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.744488   49230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c ...
	I1124 09:47:27.744506   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c: {Name:mk58d5d689d1c5e5243e47b705f1e000fdb59d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.744609   49230 certs.go:382] copying /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt.be156d5c -> /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt
	I1124 09:47:27.744685   49230 certs.go:386] copying /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key.be156d5c -> /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key
	I1124 09:47:27.744739   49230 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key
	I1124 09:47:27.744766   49230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt with IP's: []
	I1124 09:47:27.761784   49230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt ...
	I1124 09:47:27.761811   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt: {Name:mk4f44d39bc0c8806d7120167d989468c7835a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:27.761997   49230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key ...
	I1124 09:47:27.762014   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key: {Name:mk6c8b61277ff226ad7caaff39303caa33ed0c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
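The certs.go/crypto.go steps above generate the profile's client, apiserver and aggregator certificates, with the apiserver cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.81]. For reference, a self-contained crypto/x509 sketch that signs a server certificate with those SANs under a throwaway CA is shown below; it mirrors the idea only and is not minikube's actual crypto.go (error handling elided for brevity).

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver-style serving cert with the IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.81"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```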
	I1124 09:47:27.762225   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem (1338 bytes)
	W1124 09:47:27.762266   49230 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:27.762278   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:47:27.762301   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:27.762324   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:27.762348   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:27.762387   49230 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:27.762896   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:27.799260   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:27.835181   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:27.871331   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:27.909219   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:47:27.959186   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:25.285986   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:25.310454   45116 kubeadm.go:602] duration metric: took 4m14.294214023s to restartPrimaryControlPlane
	W1124 09:47:25.310536   45116 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:47:25.310655   45116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W1124 09:47:25.964788   49070 pod_ready.go:104] pod "etcd-old-k8s-version-960867" is not "Ready", error: <nil>
	W1124 09:47:28.310274   49070 pod_ready.go:104] pod "etcd-old-k8s-version-960867" is not "Ready", error: <nil>
	I1124 09:47:30.169294   45116 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.858611236s)
	I1124 09:47:30.169392   45116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:30.192823   45116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:47:30.207816   45116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:47:30.227083   45116 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:47:30.227105   45116 kubeadm.go:158] found existing configuration files:
	
	I1124 09:47:30.227187   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:47:30.245334   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:47:30.245402   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:47:30.262454   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:47:30.281200   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:47:30.281442   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:47:30.300737   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:47:30.319229   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:47:30.319299   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:47:30.337698   45116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:47:30.357706   45116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:47:30.357777   45116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:47:30.379012   45116 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 09:47:30.450856   45116 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:47:30.450934   45116 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:47:30.599580   45116 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:47:30.599741   45116 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:47:30.599882   45116 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:47:30.611941   45116 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
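The stale-config cleanup above relies on grep's exit status: 0 means the expected control-plane URL is present, while 1 (no match) or 2 (file unreadable or missing) marks the kubeconfig as stale, so the file is removed before kubeadm init runs. A minimal Go sketch of that pattern, assuming a hypothetical helper and the paths from the log (not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint. grep exits 0 on a match, 1 on no match,
	// and 2 when the file cannot be read; anything non-zero is treated as
	// "stale or missing" and the file is deleted.
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not confirmed in %s (%v), removing\n", endpoint, f, err)
				exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}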
	I1124 09:47:29.969460   49070 pod_ready.go:94] pod "etcd-old-k8s-version-960867" is "Ready"
	I1124 09:47:29.969495   49070 pod_ready.go:86] duration metric: took 8.013057466s for pod "etcd-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:29.980910   49070 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:29.991545   49070 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-960867" is "Ready"
	I1124 09:47:29.991573   49070 pod_ready.go:86] duration metric: took 10.63769ms for pod "kube-apiserver-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.001331   49070 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.024156   49070 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-960867" is "Ready"
	I1124 09:47:30.024205   49070 pod_ready.go:86] duration metric: took 22.84268ms for pod "kube-controller-manager-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.038886   49070 pod_ready.go:83] waiting for pod "kube-proxy-lmg4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.267707   49070 pod_ready.go:94] pod "kube-proxy-lmg4n" is "Ready"
	I1124 09:47:30.267743   49070 pod_ready.go:86] duration metric: took 228.825451ms for pod "kube-proxy-lmg4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.469447   49070 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.866953   49070 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-960867" is "Ready"
	I1124 09:47:30.866993   49070 pod_ready.go:86] duration metric: took 397.512223ms for pod "kube-scheduler-old-k8s-version-960867" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:30.867013   49070 pod_ready.go:40] duration metric: took 12.44109528s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:30.931711   49070 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:47:30.933482   49070 out.go:203] 
	W1124 09:47:30.934896   49070 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:47:30.936033   49070 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:47:30.937400   49070 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-960867" cluster and "default" namespace by default
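The pod_ready.go lines above poll each kube-system control-plane pod until its Ready condition turns true or a deadline passes. A rough sketch of such a wait loop, shelling out to kubectl's jsonpath output; the helper name, polling interval, and timeout are illustrative, not minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls the pod's Ready condition via kubectl until it is
	// "True" or the timeout elapses.
	func waitPodReady(ctx, ns, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, pod, timeout)
	}

	func main() {
		err := waitPodReady("old-k8s-version-960867", "kube-system", "etcd-old-k8s-version-960867", 4*time.Minute)
		fmt.Println(err)
	}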
	I1124 09:47:30.615363   45116 out.go:252]   - Generating certificates and keys ...
	I1124 09:47:30.615501   45116 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:47:30.615596   45116 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:47:30.615704   45116 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:47:30.615769   45116 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:47:30.615845   45116 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:47:30.615915   45116 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:47:30.615991   45116 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:47:30.616069   45116 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:47:30.616178   45116 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:47:30.616293   45116 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:47:30.616353   45116 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:47:30.616430   45116 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:47:30.710466   45116 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:47:30.889597   45116 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:47:31.280869   45116 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:47:31.529347   45116 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:47:31.804508   45116 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:47:31.805285   45116 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:47:31.809182   45116 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:47:30.296419   49468 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I1124 09:47:28.043772   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:28.174421   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/embed-certs-626350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:47:28.212152   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /usr/share/ca-certificates/96292.pem (1708 bytes)
	I1124 09:47:28.250367   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:28.284602   49230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem --> /usr/share/ca-certificates/9629.pem (1338 bytes)
	I1124 09:47:28.323091   49230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:28.344189   49230 ssh_runner.go:195] Run: openssl version
	I1124 09:47:28.351343   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96292.pem && ln -fs /usr/share/ca-certificates/96292.pem /etc/ssl/certs/96292.pem"
	I1124 09:47:28.366811   49230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96292.pem
	I1124 09:47:28.372317   49230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:42 /usr/share/ca-certificates/96292.pem
	I1124 09:47:28.372387   49230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96292.pem
	I1124 09:47:28.379880   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:28.397799   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:28.412722   49230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:28.419873   49230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:28.419941   49230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:28.427569   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:28.443947   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9629.pem && ln -fs /usr/share/ca-certificates/9629.pem /etc/ssl/certs/9629.pem"
	I1124 09:47:28.458562   49230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9629.pem
	I1124 09:47:28.463812   49230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:42 /usr/share/ca-certificates/9629.pem
	I1124 09:47:28.463873   49230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9629.pem
	I1124 09:47:28.471104   49230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9629.pem /etc/ssl/certs/51391683.0"
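The certs.go steps above copy each CA into /usr/share/ca-certificates and then link it under /etc/ssl/certs as <subject-hash>.0, with the hash taken from `openssl x509 -hash -noout` (e.g. minikubeCA.pem -> b5213941.0), which is how OpenSSL locates trusted CAs. A small sketch of that convention; the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCACert links certPath into /etc/ssl/certs under the OpenSSL
	// subject-hash name (<hash>.0).
	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}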
	I1124 09:47:28.485441   49230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:28.490493   49230 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:47:28.490557   49230 kubeadm.go:401] StartCluster: {Name:embed-certs-626350 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-626350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.81 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:28.490633   49230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:28.490706   49230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:28.526176   49230 cri.go:89] found id: ""
	I1124 09:47:28.526261   49230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:28.538731   49230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:47:28.556711   49230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:47:28.572053   49230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:47:28.572093   49230 kubeadm.go:158] found existing configuration files:
	
	I1124 09:47:28.572147   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:47:28.587240   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:47:28.587313   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:47:28.604867   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:47:28.620518   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:47:28.620588   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:47:28.637209   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:47:28.652740   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:47:28.652816   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:47:28.669669   49230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:47:28.685604   49230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:47:28.685678   49230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:47:28.697996   49230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 09:47:28.905963   49230 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:47:31.814094   45116 out.go:252]   - Booting up control plane ...
	I1124 09:47:31.814296   45116 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:47:31.814460   45116 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:47:31.814610   45116 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:47:31.846212   45116 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:47:31.846503   45116 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:47:31.857349   45116 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:47:31.857574   45116 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:47:31.857680   45116 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:47:32.091015   45116 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:47:32.091227   45116 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:47:32.603864   45116 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 511.088713ms
	I1124 09:47:32.608667   45116 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:47:32.609024   45116 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.144:8443/livez
	I1124 09:47:32.609268   45116 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:47:32.609378   45116 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
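The control-plane-check phase above polls the apiserver /livez endpoint and the local controller-manager and scheduler health ports until they answer, with a 4m ceiling. A minimal sketch of such a probe, assuming it is acceptable to skip TLS verification against the bootstrap self-signed certificate; the endpoint and interval are illustrative:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns HTTP 200 or the timeout expires.
	// InsecureSkipVerify is used only because the apiserver serves a
	// self-signed cert during bootstrap; a real client would pin the cluster CA.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitHealthy("https://192.168.39.144:8443/livez", 4*time.Minute))
	}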
	I1124 09:47:33.297474   49468 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: connection refused
	I1124 09:47:36.426245   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:36.431569   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.432272   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.432307   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.432575   49468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/config.json ...
	I1124 09:47:36.432840   49468 machine.go:94] provisionDockerMachine start ...
	I1124 09:47:36.436142   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.436610   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.436646   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.437023   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:36.437334   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:36.437351   49468 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:47:36.560509   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 09:47:36.560558   49468 buildroot.go:166] provisioning hostname "no-preload-778378"
	I1124 09:47:36.564819   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.565392   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.565427   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.565689   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:36.565977   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:36.566008   49468 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-778378 && echo "no-preload-778378" | sudo tee /etc/hostname
	I1124 09:47:36.718605   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-778378
	
	I1124 09:47:36.722692   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.723181   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.723266   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.723614   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:36.723903   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:36.723928   49468 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778378' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778378/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778378' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:36.856289   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
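The SSH snippet above makes sure /etc/hosts maps 127.0.1.1 to the machine hostname, rewriting an existing 127.0.1.1 entry or appending one if none exists. The same idempotent edit, sketched natively in Go instead of sed/tee; the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites or appends the 127.0.1.1 line so the local
	// hostname always resolves, mirroring the shell snippet in the log.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), hostname) {
			return nil // hostname already resolvable locally
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		fmt.Println(ensureHostsEntry("/etc/hosts", "no-preload-778378"))
	}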
	I1124 09:47:36.856323   49468 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21978-5665/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-5665/.minikube}
	I1124 09:47:36.856354   49468 buildroot.go:174] setting up certificates
	I1124 09:47:36.856373   49468 provision.go:84] configureAuth start
	I1124 09:47:36.861471   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.861921   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.861950   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.865346   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.865793   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:36.865823   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:36.865982   49468 provision.go:143] copyHostCerts
	I1124 09:47:36.866039   49468 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem, removing ...
	I1124 09:47:36.866057   49468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem
	I1124 09:47:36.866128   49468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/ca.pem (1078 bytes)
	I1124 09:47:36.866301   49468 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem, removing ...
	I1124 09:47:36.866315   49468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem
	I1124 09:47:36.866352   49468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/cert.pem (1123 bytes)
	I1124 09:47:36.866440   49468 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem, removing ...
	I1124 09:47:36.866450   49468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem
	I1124 09:47:36.866478   49468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-5665/.minikube/key.pem (1675 bytes)
	I1124 09:47:36.866553   49468 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem org=jenkins.no-preload-778378 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-778378]
	I1124 09:47:37.079398   49468 provision.go:177] copyRemoteCerts
	I1124 09:47:37.079469   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:37.082568   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.083059   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.083087   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.083393   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:37.175510   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:37.218719   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:37.259590   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
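provision.go:117 above issues a machine server certificate signed by the minikube CA with SANs for 127.0.0.1, the VM IP, localhost, minikube, and the profile name. A compact crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA so it is self-contained, and the key sizes and validity period are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (in minikube the CA key/cert already exist on disk).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "no-preload-778378"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.119")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-778378"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}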
	I1124 09:47:36.511991   45116 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.903315485s
	I1124 09:47:37.385036   45116 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.776583383s
	I1124 09:47:39.612514   45116 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003630831s
	I1124 09:47:39.637176   45116 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:47:39.655928   45116 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:47:39.673008   45116 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:47:39.673307   45116 kubeadm.go:319] [mark-control-plane] Marking the node pause-377882 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:47:39.689759   45116 kubeadm.go:319] [bootstrap-token] Using token: dkfba4.wiyzaabyuc92dy77
	I1124 09:47:37.303189   49468 provision.go:87] duration metric: took 446.792901ms to configureAuth
	I1124 09:47:37.303220   49468 buildroot.go:189] setting minikube options for container-runtime
	I1124 09:47:37.303468   49468 config.go:182] Loaded profile config "no-preload-778378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:37.307947   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.308593   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.308683   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.309192   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:37.309697   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:37.309777   49468 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:37.654536   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:37.654657   49468 machine.go:97] duration metric: took 1.221731659s to provisionDockerMachine
	I1124 09:47:37.654684   49468 start.go:293] postStartSetup for "no-preload-778378" (driver="kvm2")
	I1124 09:47:37.654701   49468 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:37.654784   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:37.658780   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.659326   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.659368   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.659753   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:37.748941   49468 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:37.754891   49468 info.go:137] Remote host: Buildroot 2025.02
	I1124 09:47:37.754924   49468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/addons for local assets ...
	I1124 09:47:37.755011   49468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-5665/.minikube/files for local assets ...
	I1124 09:47:37.755121   49468 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem -> 96292.pem in /etc/ssl/certs
	I1124 09:47:37.755290   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:47:37.769292   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:37.805342   49468 start.go:296] duration metric: took 150.641101ms for postStartSetup
	I1124 09:47:37.805401   49468 fix.go:56] duration metric: took 18.015768823s for fixHost
	I1124 09:47:37.808800   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.809270   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.809331   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.809584   49468 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:37.809861   49468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I1124 09:47:37.809878   49468 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 09:47:37.923307   49468 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763977657.869488968
	
	I1124 09:47:37.923335   49468 fix.go:216] guest clock: 1763977657.869488968
	I1124 09:47:37.923345   49468 fix.go:229] Guest: 2025-11-24 09:47:37.869488968 +0000 UTC Remote: 2025-11-24 09:47:37.80540708 +0000 UTC m=+25.610301982 (delta=64.081888ms)
	I1124 09:47:37.923363   49468 fix.go:200] guest clock delta is within tolerance: 64.081888ms
	I1124 09:47:37.923369   49468 start.go:83] releasing machines lock for "no-preload-778378", held for 18.133776137s
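fix.go above reads the guest clock over SSH with `date +%s.%N`, parses the fractional Unix timestamp, and accepts the setup when the host/guest delta stays within tolerance. A small sketch of that comparison; the tolerance value is an assumption:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
	// the guest clock is within tol of the host clock.
	func clockDeltaOK(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Duration(math.Abs(float64(host.Sub(guest))))
		return delta, delta <= tol
	}

	func main() {
		// Values taken from the log above: guest clock vs. host remote time.
		delta, ok := clockDeltaOK("1763977657.869488968", time.Unix(1763977657, 805407080), 2*time.Second)
		fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
	}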
	I1124 09:47:37.927122   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.927625   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.927678   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.928253   49468 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:37.928303   49468 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:37.932007   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932213   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932513   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.932549   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932576   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:37.932600   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:37.932734   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:37.932920   49468 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/no-preload-778378/id_rsa Username:docker}
	I1124 09:47:38.016517   49468 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:38.051815   49468 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:38.216433   49468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:47:38.228075   49468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:38.228180   49468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:38.257370   49468 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:47:38.257398   49468 start.go:496] detecting cgroup driver to use...
	I1124 09:47:38.257468   49468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:38.288592   49468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:38.314406   49468 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:38.314503   49468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:38.342931   49468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:38.367024   49468 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:38.550224   49468 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:38.799542   49468 docker.go:234] disabling docker service ...
	I1124 09:47:38.799618   49468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:38.828617   49468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:38.858526   49468 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:39.109058   49468 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:39.337875   49468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:39.370317   49468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:39.405202   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:39.716591   49468 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:39.716684   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.733646   49468 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:39.733737   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.749322   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.762122   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.776172   49468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:39.793655   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.808258   49468 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.837251   49468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:39.851250   49468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:39.866186   49468 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 09:47:39.866250   49468 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 09:47:39.893140   49468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:39.907382   49468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:40.073077   49468 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:40.230279   49468 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:40.230392   49468 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:40.238673   49468 start.go:564] Will wait 60s for crictl version
	I1124 09:47:40.238759   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:40.245716   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 09:47:40.299447   49468 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 09:47:40.299569   49468 ssh_runner.go:195] Run: crio --version
	I1124 09:47:40.343754   49468 ssh_runner.go:195] Run: crio --version
	I1124 09:47:40.386006   49468 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
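start.go above waits up to 60s for /var/run/crio/crio.sock to appear and then queries `crictl version` for the runtime name and version. A sketch of that socket wait; the poll interval is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForSocket polls until the CRI socket exists, then asks crictl for
	// the runtime name and version.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
				fmt.Print(string(out))
				return err
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}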
	I1124 09:47:39.692299   45116 out.go:252]   - Configuring RBAC rules ...
	I1124 09:47:39.692438   45116 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:47:39.701326   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:47:39.711822   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:47:39.717209   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:47:39.727232   45116 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:47:39.733694   45116 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:47:40.108285   45116 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:47:40.719586   45116 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:47:41.020955   45116 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:47:41.022360   45116 kubeadm.go:319] 
	I1124 09:47:41.022508   45116 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:47:41.022527   45116 kubeadm.go:319] 
	I1124 09:47:41.022663   45116 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:47:41.022708   45116 kubeadm.go:319] 
	I1124 09:47:41.022785   45116 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:47:41.022895   45116 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:47:41.022965   45116 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:47:41.022971   45116 kubeadm.go:319] 
	I1124 09:47:41.023042   45116 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:47:41.023053   45116 kubeadm.go:319] 
	I1124 09:47:41.023138   45116 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:47:41.023173   45116 kubeadm.go:319] 
	I1124 09:47:41.023243   45116 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:47:41.023376   45116 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:47:41.023503   45116 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:47:41.023511   45116 kubeadm.go:319] 
	I1124 09:47:41.023613   45116 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:47:41.023742   45116 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:47:41.023754   45116 kubeadm.go:319] 
	I1124 09:47:41.023880   45116 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dkfba4.wiyzaabyuc92dy77 \
	I1124 09:47:41.024030   45116 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa \
	I1124 09:47:41.024076   45116 kubeadm.go:319] 	--control-plane 
	I1124 09:47:41.024086   45116 kubeadm.go:319] 
	I1124 09:47:41.024209   45116 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:47:41.024221   45116 kubeadm.go:319] 
	I1124 09:47:41.024326   45116 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dkfba4.wiyzaabyuc92dy77 \
	I1124 09:47:41.024467   45116 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa 
	I1124 09:47:41.027844   45116 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:47:41.027874   45116 cni.go:84] Creating CNI manager for ""
	I1124 09:47:41.027883   45116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:41.030429   45116 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
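The kubeadm join command printed above pins the cluster CA with --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes the value from a CA PEM; the path is an assumption:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash returns the kubeadm-style discovery hash: sha256 over the
	// DER-encoded Subject Public Key Info of the CA certificate.
	func caCertHash(caPEMPath string) (string, error) {
		data, err := os.ReadFile(caPEMPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", caPEMPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(h)
	}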
	I1124 09:47:40.391437   49468 main.go:143] libmachine: domain no-preload-778378 has defined MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:40.392027   49468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fd:5d", ip: ""} in network mk-no-preload-778378: {Iface:virbr4 ExpiryTime:2025-11-24 10:47:32 +0000 UTC Type:0 Mac:52:54:00:8b:fd:5d Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-778378 Clientid:01:52:54:00:8b:fd:5d}
	I1124 09:47:40.392050   49468 main.go:143] libmachine: domain no-preload-778378 has defined IP address 192.168.72.119 and MAC address 52:54:00:8b:fd:5d in network mk-no-preload-778378
	I1124 09:47:40.392304   49468 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:40.399318   49468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:47:40.422415   49468 kubeadm.go:884] updating cluster {Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:40.422684   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:40.719627   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:41.018605   49468 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:47:41.322596   49468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:41.322687   49468 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:41.368278   49468 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:47:41.368310   49468 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:47:41.368379   49468 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:41.368397   49468 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.368411   49468 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.368431   49468 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.368455   49468 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:47:41.368474   49468 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.368493   49468 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.368549   49468 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:47:41.370299   49468 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.370479   49468 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.370657   49468 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:41.370799   49468 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.370907   49468 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.371101   49468 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:47:41.371232   49468 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:47:41.371319   49468 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.659265   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 09:47:41.661115   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.670704   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.675074   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.690818   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.695978   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.703708   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 09:47:41.947377   49468 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:47:41.947428   49468 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:47:41.947482   49468 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:47:41.947503   49468 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:41.947564   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947436   49468 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:41.947638   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947565   49468 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:47:41.947673   49468 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:41.947735   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947501   49468 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:41.947784   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:41.947390   49468 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:47:41.947848   49468 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:41.947879   49468 ssh_runner.go:195] Run: which crictl
	I1124 09:47:42.030556   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:42.030593   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:42.030616   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:42.030558   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:42.030647   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:42.128639   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:42.128702   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:42.133613   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:42.133726   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:42.133867   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:42.224745   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:47:42.224756   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:47:42.236400   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:47:42.236519   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:47:42.246691   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:47:42.842156   49230 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:47:42.842256   49230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:47:42.842381   49230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:47:42.842524   49230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:47:42.842641   49230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:47:42.842711   49230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:47:42.844791   49230 out.go:252]   - Generating certificates and keys ...
	I1124 09:47:42.844876   49230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:47:42.844953   49230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:47:42.845069   49230 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:47:42.845174   49230 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:47:42.845276   49230 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:47:42.845344   49230 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:47:42.845414   49230 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:47:42.845573   49230 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-626350 localhost] and IPs [192.168.61.81 127.0.0.1 ::1]
	I1124 09:47:42.845642   49230 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:47:42.845811   49230 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-626350 localhost] and IPs [192.168.61.81 127.0.0.1 ::1]
	I1124 09:47:42.845903   49230 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:47:42.846005   49230 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:47:42.846065   49230 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:47:42.846193   49230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:47:42.846272   49230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:47:42.846365   49230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:47:42.846462   49230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:47:42.846561   49230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:47:42.846648   49230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:47:42.846774   49230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:47:42.846868   49230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:47:42.848841   49230 out.go:252]   - Booting up control plane ...
	I1124 09:47:42.848974   49230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:47:42.849105   49230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:47:42.849241   49230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:47:42.849429   49230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:47:42.849592   49230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:47:42.849801   49230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:47:42.849964   49230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:47:42.850017   49230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:47:42.850196   49230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:47:42.850370   49230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:47:42.850467   49230 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501804784s
	I1124 09:47:42.850617   49230 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:47:42.850735   49230 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.81:8443/livez
	I1124 09:47:42.850875   49230 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:47:42.851008   49230 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:47:42.851119   49230 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.542586482s
	I1124 09:47:42.851231   49230 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.485477469s
	I1124 09:47:42.851357   49230 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503199752s
	I1124 09:47:42.851501   49230 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:47:42.851657   49230 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:47:42.851729   49230 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:47:42.851942   49230 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-626350 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:47:42.852007   49230 kubeadm.go:319] [bootstrap-token] Using token: 10o6xo.r4t1k3a5ac1zo35l
	I1124 09:47:42.855290   49230 out.go:252]   - Configuring RBAC rules ...
	I1124 09:47:42.855388   49230 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:47:42.855463   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:47:42.855601   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:47:42.855713   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:47:42.855844   49230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:47:42.855965   49230 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:47:42.856117   49230 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:47:42.856187   49230 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:47:42.856235   49230 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:47:42.856241   49230 kubeadm.go:319] 
	I1124 09:47:42.856310   49230 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:47:42.856321   49230 kubeadm.go:319] 
	I1124 09:47:42.856417   49230 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:47:42.856424   49230 kubeadm.go:319] 
	I1124 09:47:42.856443   49230 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:47:42.856532   49230 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:47:42.856616   49230 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:47:42.856630   49230 kubeadm.go:319] 
	I1124 09:47:42.856704   49230 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:47:42.856718   49230 kubeadm.go:319] 
	I1124 09:47:42.856776   49230 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:47:42.856790   49230 kubeadm.go:319] 
	I1124 09:47:42.856857   49230 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:47:42.856957   49230 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:47:42.857062   49230 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:47:42.857071   49230 kubeadm.go:319] 
	I1124 09:47:42.857201   49230 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:47:42.857306   49230 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:47:42.857315   49230 kubeadm.go:319] 
	I1124 09:47:42.857421   49230 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 10o6xo.r4t1k3a5ac1zo35l \
	I1124 09:47:42.857560   49230 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa \
	I1124 09:47:42.857594   49230 kubeadm.go:319] 	--control-plane 
	I1124 09:47:42.857602   49230 kubeadm.go:319] 
	I1124 09:47:42.857667   49230 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:47:42.857674   49230 kubeadm.go:319] 
	I1124 09:47:42.857744   49230 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 10o6xo.r4t1k3a5ac1zo35l \
	I1124 09:47:42.857845   49230 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7daf9583192b9e20a080f43e2798d86f7cbf2e3982b15db39e5771afb92c1dfa 
	I1124 09:47:42.857857   49230 cni.go:84] Creating CNI manager for ""
	I1124 09:47:42.857865   49230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:42.859407   49230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 09:47:42.860712   49230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:47:42.875098   49230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 09:47:42.901014   49230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:47:42.901107   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:42.901145   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-626350 minikube.k8s.io/updated_at=2025_11_24T09_47_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-626350 minikube.k8s.io/primary=true
	I1124 09:47:42.961791   49230 ops.go:34] apiserver oom_adj: -16
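The 49230 lines above show minikube writing a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist on embed-certs-626350; the contents of that file are not captured in this log. A minimal way to read it back by hand, assuming the profile is still running (this command is not part of the test itself):

    $ minikube -p embed-certs-626350 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist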
	I1124 09:47:41.032326   45116 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 09:47:41.052917   45116 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 09:47:41.088139   45116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:47:41.088210   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:41.088292   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-377882 minikube.k8s.io/updated_at=2025_11_24T09_47_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=pause-377882 minikube.k8s.io/primary=true
	I1124 09:47:41.260728   45116 ops.go:34] apiserver oom_adj: -16
	I1124 09:47:41.260737   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:41.760896   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:42.260939   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:42.761682   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:43.261592   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:43.761513   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.260802   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.761638   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.260814   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.761485   45116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.859385   45116 kubeadm.go:1114] duration metric: took 4.771240887s to wait for elevateKubeSystemPrivileges
	I1124 09:47:45.859419   45116 kubeadm.go:403] duration metric: took 4m35.037363705s to StartCluster
	I1124 09:47:45.859439   45116 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:45.859538   45116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:45.860786   45116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:45.861117   45116 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:45.861342   45116 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:45.861474   45116 config.go:182] Loaded profile config "pause-377882": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:47:45.863498   45116 out.go:179] * Enabled addons: 
	I1124 09:47:45.863505   45116 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:42.318580   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.318696   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.329403   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:47:42.329527   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:47:42.341584   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:47:42.341594   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:42.341699   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:47:42.341807   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:42.352035   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:42.352094   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (exists)
	I1124 09:47:42.352117   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.352146   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:42.352175   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:47:42.353408   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (exists)
	I1124 09:47:42.355304   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (exists)
	I1124 09:47:42.355331   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.13.1 (exists)
	I1124 09:47:42.679799   49468 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:45.143630   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.791429216s)
	I1124 09:47:45.143660   49468 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.791498276s)
	I1124 09:47:45.143687   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (exists)
	I1124 09:47:45.143667   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:47:45.143713   49468 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.463882545s)
	I1124 09:47:45.143750   49468 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:47:45.143719   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:47:45.143782   49468 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:45.143824   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:47:45.143834   49468 ssh_runner.go:195] Run: which crictl
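Process 49468 above resolves each required image against the host-side cache under /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/ before transferring it into the node. A sketch for inspecting that cache on the CI host, using only paths that appear in the log (not a step the test runs):

    $ ls -lh /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/
    # per the cache_images.go lines above, this should list the kube-apiserver, kube-proxy,
    # kube-controller-manager and kube-scheduler v1.35.0-beta.0 tarballs plus a coredns/ subdirectory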
	I1124 09:47:43.080379   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:43.580822   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.081393   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:44.580960   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.081448   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:45.581018   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:46.080976   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:46.580763   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:47.081426   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:47.580758   49230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:47:47.692366   49230 kubeadm.go:1114] duration metric: took 4.791324343s to wait for elevateKubeSystemPrivileges
	I1124 09:47:47.692413   49230 kubeadm.go:403] duration metric: took 19.201859766s to StartCluster
	I1124 09:47:47.692438   49230 settings.go:142] acquiring lock: {Name:mk8c53451efff71ca8ccb056ba6e823b5a763735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:47.692533   49230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:47.694146   49230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:47.694433   49230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:47:47.694432   49230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.81 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:47.694518   49230 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:47.694709   49230 config.go:182] Loaded profile config "embed-certs-626350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:47:47.694749   49230 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-626350"
	I1124 09:47:47.694763   49230 addons.go:70] Setting default-storageclass=true in profile "embed-certs-626350"
	I1124 09:47:47.694777   49230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-626350"
	I1124 09:47:47.694778   49230 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-626350"
	I1124 09:47:47.694812   49230 host.go:66] Checking if "embed-certs-626350" exists ...
	I1124 09:47:47.696279   49230 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:47.697728   49230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:47.697913   49230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:47.699136   49230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:47.699153   49230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:47.699664   49230 addons.go:239] Setting addon default-storageclass=true in "embed-certs-626350"
	I1124 09:47:47.699708   49230 host.go:66] Checking if "embed-certs-626350" exists ...
	I1124 09:47:47.702071   49230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:47.702092   49230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:47.702479   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.703033   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:47.703075   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.703456   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:47.705068   49230 main.go:143] libmachine: domain embed-certs-626350 has defined MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.705631   49230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:fc:08", ip: ""} in network mk-embed-certs-626350: {Iface:virbr3 ExpiryTime:2025-11-24 10:47:15 +0000 UTC Type:0 Mac:52:54:00:21:fc:08 Iaid: IPaddr:192.168.61.81 Prefix:24 Hostname:embed-certs-626350 Clientid:01:52:54:00:21:fc:08}
	I1124 09:47:47.705668   49230 main.go:143] libmachine: domain embed-certs-626350 has defined IP address 192.168.61.81 and MAC address 52:54:00:21:fc:08 in network mk-embed-certs-626350
	I1124 09:47:47.705859   49230 sshutil.go:53] new ssh client: &{IP:192.168.61.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/embed-certs-626350/id_rsa Username:docker}
	I1124 09:47:48.001103   49230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:47:48.095048   49230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:48.215604   49230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:48.544411   49230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:48.863307   49230 start.go:977] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1124 09:47:48.864754   49230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-626350" to be "Ready" ...
	I1124 09:47:48.888999   49230 node_ready.go:49] node "embed-certs-626350" is "Ready"
	I1124 09:47:48.889033   49230 node_ready.go:38] duration metric: took 24.243987ms for node "embed-certs-626350" to be "Ready" ...
	I1124 09:47:48.889049   49230 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:48.889104   49230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:49.348078   49230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132429809s)
	I1124 09:47:49.348231   49230 api_server.go:72] duration metric: took 1.653769835s to wait for apiserver process to appear ...
	I1124 09:47:49.348254   49230 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:49.348279   49230 api_server.go:253] Checking apiserver healthz at https://192.168.61.81:8443/healthz ...
	I1124 09:47:49.386277   49230 api_server.go:279] https://192.168.61.81:8443/healthz returned 200:
	ok
	I1124 09:47:49.388026   49230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-626350" context rescaled to 1 replicas
	I1124 09:47:49.389837   49230 api_server.go:141] control plane version: v1.34.2
	I1124 09:47:49.389866   49230 api_server.go:131] duration metric: took 41.604231ms to wait for apiserver health ...
	I1124 09:47:49.389876   49230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:49.400036   49230 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
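The sed pipeline run at 09:47:48.001103 above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP. Reconstructed from that sed expression (the full Corefile is not shown in this log), the fragment injected ahead of the "forward . /etc/resolv.conf" stanza is:

        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }

with a "log" directive also added before "errors"; this corresponds to the "host record injected into CoreDNS's ConfigMap" line at 09:47:48.863307.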
	I1124 09:47:45.865101   45116 addons.go:530] duration metric: took 3.766128ms for enable addons: enabled=[]
	I1124 09:47:45.865172   45116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:46.084467   45116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:46.115070   45116 node_ready.go:35] waiting up to 6m0s for node "pause-377882" to be "Ready" ...
	I1124 09:47:46.128184   45116 node_ready.go:49] node "pause-377882" is "Ready"
	I1124 09:47:46.128215   45116 node_ready.go:38] duration metric: took 13.106973ms for node "pause-377882" to be "Ready" ...
	I1124 09:47:46.128231   45116 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:46.128285   45116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:46.156705   45116 api_server.go:72] duration metric: took 295.54891ms to wait for apiserver process to appear ...
	I1124 09:47:46.156743   45116 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:46.156767   45116 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I1124 09:47:46.165521   45116 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I1124 09:47:46.167551   45116 api_server.go:141] control plane version: v1.34.2
	I1124 09:47:46.167657   45116 api_server.go:131] duration metric: took 10.90363ms to wait for apiserver health ...
	I1124 09:47:46.167691   45116 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:47:46.176866   45116 system_pods.go:59] 4 kube-system pods found
	I1124 09:47:46.176906   45116 system_pods.go:61] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.176916   45116 system_pods.go:61] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.176925   45116 system_pods.go:61] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.176933   45116 system_pods.go:61] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.176942   45116 system_pods.go:74] duration metric: took 9.209601ms to wait for pod list to return data ...
	I1124 09:47:46.176952   45116 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:47:46.182616   45116 default_sa.go:45] found service account: "default"
	I1124 09:47:46.182646   45116 default_sa.go:55] duration metric: took 5.686595ms for default service account to be created ...
	I1124 09:47:46.182661   45116 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:47:46.272960   45116 system_pods.go:86] 4 kube-system pods found
	I1124 09:47:46.272991   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.273009   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.273016   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.273021   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.273062   45116 retry.go:31] will retry after 261.779199ms: missing components: kube-dns, kube-proxy
	I1124 09:47:46.547537   45116 system_pods.go:86] 5 kube-system pods found
	I1124 09:47:46.547572   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.547586   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.547595   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.547606   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:46.547612   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.547634   45116 retry.go:31] will retry after 284.613792ms: missing components: kube-dns, kube-proxy
	I1124 09:47:46.881884   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:46.881922   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:46.881933   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:46.881943   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:46.881952   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:46.881958   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:46.881965   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:46.881971   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:46.881990   45116 retry.go:31] will retry after 315.292616ms: missing components: kube-dns, kube-proxy
	I1124 09:47:47.208548   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:47.208592   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.208607   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.208620   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:47.208628   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:47.208640   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:47.208649   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:47.208659   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:47.208678   45116 retry.go:31] will retry after 507.727708ms: missing components: kube-dns, kube-proxy
	I1124 09:47:47.733665   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:47.733703   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.733724   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:47.733733   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:47.733740   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:47.733746   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:47.733756   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:47.733766   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:47.733794   45116 retry.go:31] will retry after 507.400196ms: missing components: kube-dns, kube-proxy
	I1124 09:47:48.246556   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:48.246607   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:48.246624   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:48.246637   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:48.246650   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:48.246658   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:48.246665   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Running
	I1124 09:47:48.246671   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:48.246691   45116 retry.go:31] will retry after 799.242365ms: missing components: kube-dns
	I1124 09:47:49.051374   45116 system_pods.go:86] 7 kube-system pods found
	I1124 09:47:49.051403   45116 system_pods.go:89] "coredns-66bc5c9577-fzcps" [9349d8e4-be24-4e97-bb02-f38fa659efba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.051411   45116 system_pods.go:89] "coredns-66bc5c9577-t7vnl" [3ff2e529-3c1f-431e-9199-bb2c04dbe874] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.051419   45116 system_pods.go:89] "etcd-pause-377882" [115e3ca6-9451-4da7-9a93-c8e90656f619] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.051423   45116 system_pods.go:89] "kube-apiserver-pause-377882" [f3febafd-683e-4b07-843d-67e6bbe39c12] Running
	I1124 09:47:49.051428   45116 system_pods.go:89] "kube-controller-manager-pause-377882" [65fc8990-4ffe-453d-9f47-7ed9cf2c5344] Running
	I1124 09:47:49.051432   45116 system_pods.go:89] "kube-proxy-c42hb" [2d8b2f63-dfd4-4493-a6dc-bbddad71f796] Running
	I1124 09:47:49.051436   45116 system_pods.go:89] "kube-scheduler-pause-377882" [fafc12bd-ab81-4d2d-95e5-a01f397b13e5] Running
	I1124 09:47:49.051446   45116 system_pods.go:126] duration metric: took 2.868778011s to wait for k8s-apps to be running ...
	I1124 09:47:49.051456   45116 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:47:49.051512   45116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:49.074813   45116 system_svc.go:56] duration metric: took 23.348137ms WaitForService to wait for kubelet
	I1124 09:47:49.074847   45116 kubeadm.go:587] duration metric: took 3.213697828s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:49.074863   45116 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:49.078784   45116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:49.078820   45116 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:49.078834   45116 node_conditions.go:105] duration metric: took 3.966011ms to run NodePressure ...
	I1124 09:47:49.078849   45116 start.go:242] waiting for startup goroutines ...
	I1124 09:47:49.078860   45116 start.go:247] waiting for cluster config update ...
	I1124 09:47:49.078872   45116 start.go:256] writing updated cluster config ...
	I1124 09:47:49.079272   45116 ssh_runner.go:195] Run: rm -f paused
	I1124 09:47:49.084914   45116 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:49.086200   45116 kapi.go:59] client config for pause-377882: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/profiles/pause-377882/client.key", CAFile:"/home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:49.090408   45116 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fzcps" in "kube-system" namespace to be "Ready" or be gone ...
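The 45116 retry loop above (system_pods.go and pod_ready.go) polls kube-system until kube-dns and kube-proxy report Running and then waits for per-pod readiness. An approximate manual equivalent, assuming the kubeconfig context carries the profile name pause-377882 as minikube normally sets it (the test performs this wait in Go rather than via kubectl):

    $ kubectl --context pause-377882 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m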
	I1124 09:47:47.297860   49468 ssh_runner.go:235] Completed: which crictl: (2.154005231s)
	I1124 09:47:47.297920   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (2.154068188s)
	I1124 09:47:47.297941   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:47.297946   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:47:47.297973   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:47.298019   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:47:48.873034   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.574992591s)
	I1124 09:47:48.873075   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:47:48.873102   49468 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:47:48.873155   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:47:48.873223   49468 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.575268548s)
	I1124 09:47:48.873287   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:50.770204   49468 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.896891782s)
	I1124 09:47:50.770288   49468 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:50.770310   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.897126269s)
	I1124 09:47:50.770325   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:47:50.770353   49468 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:50.770385   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:47:51.835648   49468 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.065240655s)
	I1124 09:47:51.835686   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:47:51.835689   49468 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.065377432s)
	I1124 09:47:51.835741   49468 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:47:51.835843   49468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:47:51.841904   49468 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1124 09:47:51.841930   49468 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:47:51.841970   49468 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
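At this point process 49468 has podman-loaded the kube component and coredns images from the host cache and is loading storage-provisioner_v5. A quick way to confirm the images are visible to the CRI runtime afterwards, with <profile> standing in for the profile this run is provisioning (its name does not appear in this excerpt):

    $ minikube -p <profile> ssh -- sudo crictl images
    # the v1.35.0-beta.0 tags and coredns v1.13.1 should now appear in the listing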
	I1124 09:47:49.401277   49230 addons.go:530] duration metric: took 1.706757046s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:47:49.406238   49230 system_pods.go:59] 8 kube-system pods found
	I1124 09:47:49.406288   49230 system_pods.go:61] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.406314   49230 system_pods.go:61] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.406324   49230 system_pods.go:61] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.406334   49230 system_pods.go:61] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:49.406343   49230 system_pods.go:61] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:49.406355   49230 system_pods.go:61] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:49.406363   49230 system_pods.go:61] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:49.406370   49230 system_pods.go:61] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending
	I1124 09:47:49.406378   49230 system_pods.go:74] duration metric: took 16.495786ms to wait for pod list to return data ...
	I1124 09:47:49.406390   49230 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:47:49.417315   49230 default_sa.go:45] found service account: "default"
	I1124 09:47:49.417356   49230 default_sa.go:55] duration metric: took 10.955817ms for default service account to be created ...
	I1124 09:47:49.417370   49230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:47:49.426259   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:49.426297   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.426322   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.426332   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.426339   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:49.426352   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:49.426364   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:47:49.426375   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:49.426387   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:49.426412   49230 retry.go:31] will retry after 264.613707ms: missing components: kube-dns, kube-proxy
	I1124 09:47:49.715652   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:49.715688   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.715699   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:49.715708   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:49.715725   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:49.715743   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:49.715749   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:49.715758   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:49.715770   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:49.715791   49230 retry.go:31] will retry after 350.158621ms: missing components: kube-dns
	I1124 09:47:50.071130   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:50.071186   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.071219   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.071244   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:50.071252   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:50.071263   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:50.071279   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:50.071291   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:50.071299   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:50.071318   49230 retry.go:31] will retry after 375.919932ms: missing components: kube-dns
	I1124 09:47:50.451626   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:50.451663   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.451674   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:50.451689   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:50.451697   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:50.451707   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:47:50.451720   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:50.451733   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:50.451742   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:47:50.451767   49230 retry.go:31] will retry after 606.729657ms: missing components: kube-dns
	I1124 09:47:51.064644   49230 system_pods.go:86] 8 kube-system pods found
	I1124 09:47:51.064682   49230 system_pods.go:89] "coredns-66bc5c9577-g85rx" [386288f0-71ea-4c13-9384-ff15a126424c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:51.064690   49230 system_pods.go:89] "coredns-66bc5c9577-l484d" [00113171-e293-49ab-9a13-a540d6734c55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:47:51.064697   49230 system_pods.go:89] "etcd-embed-certs-626350" [2aa5233a-21c3-4f30-9f10-1b170dbf2811] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:47:51.064701   49230 system_pods.go:89] "kube-apiserver-embed-certs-626350" [fab1cad0-d07c-4f91-acf7-2c126f0fd47a] Running
	I1124 09:47:51.064709   49230 system_pods.go:89] "kube-controller-manager-embed-certs-626350" [2bbff51f-0dee-4e4d-a3e2-9abd7e39e96c] Running
	I1124 09:47:51.064713   49230 system_pods.go:89] "kube-proxy-qc9w6" [9d2d3702-f974-4c83-8a9b-4ca173395460] Running
	I1124 09:47:51.064718   49230 system_pods.go:89] "kube-scheduler-embed-certs-626350" [ecfccb77-6b5d-4d0e-a287-2c7a351dbb1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:47:51.064721   49230 system_pods.go:89] "storage-provisioner" [a525f31e-3ad8-4ab4-94ac-a3d6437b32bc] Running
	I1124 09:47:51.064730   49230 system_pods.go:126] duration metric: took 1.6473536s to wait for k8s-apps to be running ...
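Editor's note: the "will retry after …: missing components" lines above are produced by minikube's internal retry helper while it polls kube-system pods. Purely as an illustrative sketch (plain Go with a hypothetical pollUntil helper, not minikube's retry.go), a backoff poll of that shape could look like this:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollUntil retries check() with a roughly doubling delay until it returns
// nil or the context is done.
func pollUntil(ctx context.Context, initial time.Duration, check func() error) error {
	delay := initial
	for {
		err := check()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out, last error: %w", err)
		case <-time.After(delay):
		}
		delay *= 2
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	attempts := 0
	_ = pollUntil(ctx, 250*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			// Stand-in for "missing components: kube-dns, kube-proxy".
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}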
	I1124 09:47:51.064749   49230 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:47:51.064794   49230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:47:51.083878   49230 system_svc.go:56] duration metric: took 19.11715ms WaitForService to wait for kubelet
	I1124 09:47:51.083909   49230 kubeadm.go:587] duration metric: took 3.389449545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:47:51.083931   49230 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:47:51.087934   49230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 09:47:51.087962   49230 node_conditions.go:123] node cpu capacity is 2
	I1124 09:47:51.087977   49230 node_conditions.go:105] duration metric: took 4.040359ms to run NodePressure ...
	I1124 09:47:51.087994   49230 start.go:242] waiting for startup goroutines ...
	I1124 09:47:51.088007   49230 start.go:247] waiting for cluster config update ...
	I1124 09:47:51.088021   49230 start.go:256] writing updated cluster config ...
	I1124 09:47:51.088385   49230 ssh_runner.go:195] Run: rm -f paused
	I1124 09:47:51.094717   49230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:51.099199   49230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g85rx" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:47:51.096384   45116 pod_ready.go:104] pod "coredns-66bc5c9577-fzcps" is not "Ready", error: <nil>
	W1124 09:47:53.099368   45116 pod_ready.go:104] pod "coredns-66bc5c9577-fzcps" is not "Ready", error: <nil>
	I1124 09:47:54.598857   45116 pod_ready.go:94] pod "coredns-66bc5c9577-fzcps" is "Ready"
	I1124 09:47:54.598881   45116 pod_ready.go:86] duration metric: took 5.508446462s for pod "coredns-66bc5c9577-fzcps" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.598893   45116 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t7vnl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.603960   45116 pod_ready.go:94] pod "coredns-66bc5c9577-t7vnl" is "Ready"
	I1124 09:47:54.603987   45116 pod_ready.go:86] duration metric: took 5.086956ms for pod "coredns-66bc5c9577-t7vnl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.609473   45116 pod_ready.go:83] waiting for pod "etcd-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.615781   45116 pod_ready.go:94] pod "etcd-pause-377882" is "Ready"
	I1124 09:47:54.615808   45116 pod_ready.go:86] duration metric: took 6.310805ms for pod "etcd-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.617933   45116 pod_ready.go:83] waiting for pod "kube-apiserver-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.623895   45116 pod_ready.go:94] pod "kube-apiserver-pause-377882" is "Ready"
	I1124 09:47:54.623915   45116 pod_ready.go:86] duration metric: took 5.95122ms for pod "kube-apiserver-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:54.796235   45116 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.197072   45116 pod_ready.go:94] pod "kube-controller-manager-pause-377882" is "Ready"
	I1124 09:47:55.197111   45116 pod_ready.go:86] duration metric: took 400.844861ms for pod "kube-controller-manager-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.396347   45116 pod_ready.go:83] waiting for pod "kube-proxy-c42hb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.794688   45116 pod_ready.go:94] pod "kube-proxy-c42hb" is "Ready"
	I1124 09:47:55.794716   45116 pod_ready.go:86] duration metric: took 398.334501ms for pod "kube-proxy-c42hb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:55.995859   45116 pod_ready.go:83] waiting for pod "kube-scheduler-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:56.396431   45116 pod_ready.go:94] pod "kube-scheduler-pause-377882" is "Ready"
	I1124 09:47:56.396467   45116 pod_ready.go:86] duration metric: took 400.583645ms for pod "kube-scheduler-pause-377882" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:47:56.396485   45116 pod_ready.go:40] duration metric: took 7.311533621s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:47:56.462498   45116 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:47:56.464714   45116 out.go:179] * Done! kubectl is now configured to use "pause-377882" cluster and "default" namespace by default
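Editor's note: the pod_ready.go waits above repeatedly check each kube-system pod's Ready condition through the API server. As a sketch only (assuming a kubeconfig at the default path and a hard-coded pod name taken from the log; this is not minikube's pod_ready.go), the same check with client-go would be roughly:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// podReady reports whether the pod carries a Ready condition set to True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above purely for illustration.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-g85rx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q ready: %v\n", pod.Name, podReady(pod))
}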
	I1124 09:47:52.599040   49468 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-5665/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:47:52.599094   49468 cache_images.go:125] Successfully loaded all cached images
	I1124 09:47:52.599102   49468 cache_images.go:94] duration metric: took 11.230777244s to LoadCachedImages
	I1124 09:47:52.599116   49468 kubeadm.go:935] updating node { 192.168.72.119 8443 v1.35.0-beta.0 crio true true} ...
	I1124 09:47:52.599234   49468 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-778378 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:52.599325   49468 ssh_runner.go:195] Run: crio config
	I1124 09:47:52.649825   49468 cni.go:84] Creating CNI manager for ""
	I1124 09:47:52.649863   49468 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 09:47:52.649888   49468 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:52.649922   49468 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778378 NodeName:no-preload-778378 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:52.650115   49468 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778378"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.119"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:47:52.650222   49468 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:47:52.666805   49468 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:52.666874   49468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:52.679386   49468 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1124 09:47:52.703303   49468 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:47:52.725434   49468 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
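Editor's note: the kubeadm.yaml written above is generated from the cluster's settings before being copied to the node. As a purely illustrative sketch (hypothetical template text and parameter struct, not minikube's actual templates), rendering a KubeletConfiguration fragment like the one in this log with text/template looks like:

package main

import (
	"os"
	"text/template"
)

// params is a hypothetical parameter struct for this sketch.
type params struct {
	CgroupDriver string
	CRISocket    string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	// Values mirror the generated config shown in the log above.
	_ = t.Execute(os.Stdout, params{
		CgroupDriver: "cgroupfs",
		CRISocket:    "unix:///var/run/crio/crio.sock",
	})
}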
	I1124 09:47:52.747717   49468 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:52.751798   49468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:47:52.767761   49468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:52.912445   49468 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:52.947232   49468 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378 for IP: 192.168.72.119
	I1124 09:47:52.947258   49468 certs.go:195] generating shared ca certs ...
	I1124 09:47:52.947281   49468 certs.go:227] acquiring lock for ca certs: {Name:mkc847d4fb6fb61872e24a1bb00356ff9ef1a409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:52.947489   49468 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key
	I1124 09:47:52.947556   49468 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key
	I1124 09:47:52.947573   49468 certs.go:257] generating profile certs ...
	I1124 09:47:52.947702   49468 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.key
	I1124 09:47:52.947790   49468 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/apiserver.key.a1f5695a
	I1124 09:47:52.947827   49468 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/proxy-client.key
	I1124 09:47:52.947979   49468 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem (1338 bytes)
	W1124 09:47:52.948023   49468 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:52.948038   49468 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 09:47:52.948073   49468 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:52.948125   49468 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:52.948172   49468 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:52.948233   49468 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem (1708 bytes)
	I1124 09:47:52.948974   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:52.985717   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:53.020827   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:53.054109   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:53.088348   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:47:53.129387   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:47:53.160454   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:53.191133   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:47:53.221476   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:53.251686   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/certs/9629.pem --> /usr/share/ca-certificates/9629.pem (1338 bytes)
	I1124 09:47:53.281821   49468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/ssl/certs/96292.pem --> /usr/share/ca-certificates/96292.pem (1708 bytes)
	I1124 09:47:53.312011   49468 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:53.336795   49468 ssh_runner.go:195] Run: openssl version
	I1124 09:47:53.343850   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:53.357423   49468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:53.362473   49468 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:53.362541   49468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:53.369846   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:53.382988   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9629.pem && ln -fs /usr/share/ca-certificates/9629.pem /etc/ssl/certs/9629.pem"
	I1124 09:47:53.397505   49468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9629.pem
	I1124 09:47:53.402623   49468 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:42 /usr/share/ca-certificates/9629.pem
	I1124 09:47:53.402686   49468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9629.pem
	I1124 09:47:53.410195   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9629.pem /etc/ssl/certs/51391683.0"
	I1124 09:47:53.425060   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96292.pem && ln -fs /usr/share/ca-certificates/96292.pem /etc/ssl/certs/96292.pem"
	I1124 09:47:53.439759   49468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96292.pem
	I1124 09:47:53.445541   49468 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:42 /usr/share/ca-certificates/96292.pem
	I1124 09:47:53.445615   49468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96292.pem
	I1124 09:47:53.453844   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:53.467049   49468 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:53.472306   49468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:47:53.480180   49468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:47:53.487094   49468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:47:53.494835   49468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:47:53.502119   49468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:47:53.510050   49468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
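Editor's note: the `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least 24 hours. A rough standard-library equivalent in Go (paths taken from the log; a sketch only, not minikube's actual check) would be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition -checkend 86400 treats as a failure for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Println(p, "expires within 24h:", soon)
	}
}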
	I1124 09:47:53.517697   49468 kubeadm.go:401] StartCluster: {Name:no-preload-778378 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-beta.0 ClusterName:no-preload-778378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:53.517800   49468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:53.517857   49468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:53.553354   49468 cri.go:89] found id: ""
	I1124 09:47:53.553425   49468 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:53.566351   49468 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:47:53.566371   49468 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:47:53.566415   49468 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:47:53.578640   49468 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:47:53.579645   49468 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-778378" does not appear in /home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:47:53.580298   49468 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-5665/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-778378" cluster setting kubeconfig missing "no-preload-778378" context setting]
	I1124 09:47:53.581229   49468 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/kubeconfig: {Name:mk0d9546aa57c72914bf0016eef3f2352898c1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:53.582854   49468 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:47:53.596270   49468 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.119
	I1124 09:47:53.596304   49468 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:47:53.596317   49468 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:47:53.596376   49468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:53.630574   49468 cri.go:89] found id: ""
	I1124 09:47:53.630646   49468 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:47:53.650680   49468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:47:53.665153   49468 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:47:53.665200   49468 kubeadm.go:158] found existing configuration files:
	
	I1124 09:47:53.665259   49468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:47:53.677486   49468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:47:53.677568   49468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:47:53.689373   49468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:47:53.701438   49468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:47:53.701504   49468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:47:53.714145   49468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:47:53.725775   49468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:47:53.725830   49468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:47:53.737876   49468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:47:53.748922   49468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:47:53.748983   49468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:47:53.761227   49468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:47:53.774017   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:53.904315   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:54.871563   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:55.161274   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:55.249877   49468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:47:55.366508   49468 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:47:55.366598   49468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:55.866735   49468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:56.366962   49468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:56.867628   49468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:47:56.915968   49468 api_server.go:72] duration metric: took 1.549470076s to wait for apiserver process to appear ...
	I1124 09:47:56.915995   49468 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:47:56.916016   49468 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	W1124 09:47:53.105342   49230 pod_ready.go:104] pod "coredns-66bc5c9577-g85rx" is not "Ready", error: <nil>
	W1124 09:47:55.106205   49230 pod_ready.go:104] pod "coredns-66bc5c9577-g85rx" is not "Ready", error: <nil>
	W1124 09:47:57.111893   49230 pod_ready.go:104] pod "coredns-66bc5c9577-g85rx" is not "Ready", error: <nil>
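Editor's note: the api_server.go lines above end with a healthz poll against https://192.168.72.119:8443/healthz. A minimal sketch of that kind of probe (assuming the apiserver's self-signed serving chain, so TLS verification is skipped here; a real check would trust minikube's CA instead) might be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification only because this is a local sketch;
			// load the cluster CA for anything real.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.119:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}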
	
	
	==> CRI-O <==
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.247961597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977679247934604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6f9efcc-16ab-423c-b71b-b3aaaca5bde5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.249290299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d7a1bad-6ef9-411b-9738-a566cbab4d0f name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.249599288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d7a1bad-6ef9-411b-9738-a566cbab4d0f name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.249953016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d7a1bad-6ef9-411b-9738-a566cbab4d0f name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.293199883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f73be6e-a957-4d33-96e7-7b34e0e44b2b name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.293290132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f73be6e-a957-4d33-96e7-7b34e0e44b2b name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.294514974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a2fde62-3b60-4102-9d31-6c622740cb1f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.295149321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977679295127123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a2fde62-3b60-4102-9d31-6c622740cb1f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.296241301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=326edab9-e418-4b12-8a77-130d5995574c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.296505696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=326edab9-e418-4b12-8a77-130d5995574c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.296877955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=326edab9-e418-4b12-8a77-130d5995574c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.338855228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71d616e6-637e-4054-affc-3c0f906ed95c name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.339314558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71d616e6-637e-4054-affc-3c0f906ed95c name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.341855605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de7163f7-c2e0-4076-8bcc-3ad2c272df5c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.342846701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977679342812501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de7163f7-c2e0-4076-8bcc-3ad2c272df5c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.343931777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db5baecb-822e-4457-97e5-d0fb85aa9a6c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.344003058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db5baecb-822e-4457-97e5-d0fb85aa9a6c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.344593129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db5baecb-822e-4457-97e5-d0fb85aa9a6c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.382044656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7aaa3b25-7779-4096-b06d-41c971950fb0 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.382117289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7aaa3b25-7779-4096-b06d-41c971950fb0 name=/runtime.v1.RuntimeService/Version
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.383857655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89c375d6-5caf-4eec-9bc7-6713968930e5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.384447362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763977679384410893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89c375d6-5caf-4eec-9bc7-6713968930e5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.385293924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3652b261-8e02-4e9d-8736-1bde975301f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.385382549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3652b261-8e02-4e9d-8736-1bde975301f6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 09:47:59 pause-377882 crio[3411]: time="2025-11-24 09:47:59.385543677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e,PodSandboxId:9b5f30a85369be23c0bc9f531a17801954baa6b7dc59fc7e021f0e6a2ef7741c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667751194422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t7vnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff2e529-3c1f-431e-9199-bb2c04dbe874,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7,PodSandboxId:4a4bd4f481b91d71d50b76ed63c7d18933f00c4d1a8fa94e22358d41df9ea46a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763977667667599786,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fzcps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9349d8e4-be24-4e97-bb02-f38fa659efba,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566,PodSandboxId:610a405264659547102d646bb2b2fb746cd4365d93fded7d14268da93cac580b,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1763977667089339826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c42hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8b2f63-dfd4-4493-a6dc-bbddad71f796,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40,PodSandboxId:491cdd1dfdcd59132be88cc7582207277408b537b3eb6b2e335dbb2e4fb43d2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},I
mage:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1763977653460783888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5349f02f70d317406037741f5c304ab0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a,PodSandboxId:bef1e65b3d3d2a5e242a56bd90a150a15aacdd57db1
1bdb560423d9d7d295dda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1763977653449291958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e6d4f46fb8c77f88ba12dba4ff4ad5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a2c5a8afdd30b03a
db3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77,PodSandboxId:bc2b410e495eac570a7fc08ed176afb8de6043f357a39bb7ed4afa9f4506aa0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1763977653406626221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f09fc273d07aae075f917a079c43fe,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1,PodSandboxId:eca054a4811a347bbc02efa265badbb330fae43a8aed2c4aa597caad73c911cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1763977653314106629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-377882,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0678ba0f90a992e961b1a0f9252e1f0f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3652b261-8e02-4e9d-8736-1bde975301f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	59a294b884984       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   9b5f30a85369b       coredns-66bc5c9577-t7vnl               kube-system
	c17abfeac8a0d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   4a4bd4f481b91       coredns-66bc5c9577-fzcps               kube-system
	686ee6b9ed43a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   12 seconds ago      Running             kube-proxy                0                   610a405264659       kube-proxy-c42hb                       kube-system
	3cdb365df9dd6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   26 seconds ago      Running             etcd                      4                   491cdd1dfdcd5       etcd-pause-377882                      kube-system
	ac6118758ef1f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   26 seconds ago      Running             kube-apiserver            1                   bef1e65b3d3d2       kube-apiserver-pause-377882            kube-system
	e7a2c5a8afdd3       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   26 seconds ago      Running             kube-scheduler            4                   bc2b410e495ea       kube-scheduler-pause-377882            kube-system
	fc72a91df2fc4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   26 seconds ago      Running             kube-controller-manager   8                   eca054a4811a3       kube-controller-manager-pause-377882   kube-system
	
	
	==> coredns [59a294b88498437ffa3d7fe002cded653a9b9ee4329fae789d0dbca0d858c34e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [c17abfeac8a0dbbea70e8feff333a6a7f2c07b283543ed1b96aa2edf7f7796a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               pause-377882
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-377882
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=pause-377882
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_47_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:47:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-377882
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:47:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:47:50 +0000   Mon, 24 Nov 2025 09:47:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    pause-377882
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5575a5935f24e97a9af69e7eb2c61b2
	  System UUID:                e5575a59-35f2-4e97-a9af-69e7eb2c61b2
	  Boot ID:                    349b1732-18e8-44df-80c0-d067b057d1c9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fzcps                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     13s
	  kube-system                 coredns-66bc5c9577-t7vnl                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     13s
	  kube-system                 etcd-pause-377882                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         19s
	  kube-system                 kube-apiserver-pause-377882             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-pause-377882    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-proxy-c42hb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-scheduler-pause-377882             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-377882 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-377882 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-377882 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19s                kubelet          Node pause-377882 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s                kubelet          Node pause-377882 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s                kubelet          Node pause-377882 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node pause-377882 event: Registered Node pause-377882 in Controller
	
	
	==> dmesg <==
	[  +0.001585] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002838] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.264832] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.121412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.124814] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.107066] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.193049] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000450] kauditd_printk_skb: 19 callbacks suppressed
	[Nov24 09:41] kauditd_printk_skb: 224 callbacks suppressed
	[ +31.618388] kauditd_printk_skb: 38 callbacks suppressed
	[Nov24 09:43] kauditd_printk_skb: 261 callbacks suppressed
	[  +2.352837] kauditd_printk_skb: 171 callbacks suppressed
	[  +7.638545] kauditd_printk_skb: 47 callbacks suppressed
	[ +13.540927] kauditd_printk_skb: 70 callbacks suppressed
	[Nov24 09:44] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.604735] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 09:45] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 09:47] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.725524] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.134109] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.209350] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.767250] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.416445] kauditd_printk_skb: 140 callbacks suppressed
	
	
	==> etcd [3cdb365df9dd6ccfe0c8dcb4487b0be8281e055cb97d5cbcb4c2e0bd1c8ccf40] <==
	{"level":"warn","ts":"2025-11-24T09:47:36.021097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.049597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.052251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.064003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.079843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.097871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.112693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.129342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.158539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.162964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.181010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.189287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.201278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.212143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.233861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.239559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.253505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.270561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.284914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.291631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:36.396417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:47:40.095065Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.4583ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16507270112499124295 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/roles/kube-public/kubeadm:bootstrap-signer-clusterinfo\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-public/kubeadm:bootstrap-signer-clusterinfo\" value_size:284 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:47:40.095196Z","caller":"traceutil/trace.go:172","msg":"trace[1848518674] transaction","detail":"{read_only:false; response_revision:259; number_of_response:1; }","duration":"282.572659ms","start":"2025-11-24T09:47:39.812609Z","end":"2025-11-24T09:47:40.095182Z","steps":["trace[1848518674] 'process raft request'  (duration: 64.461978ms)","trace[1848518674] 'compare'  (duration: 217.24139ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:47:40.637477Z","caller":"traceutil/trace.go:172","msg":"trace[167518157] transaction","detail":"{read_only:false; response_revision:266; number_of_response:1; }","duration":"126.849819ms","start":"2025-11-24T09:47:40.510614Z","end":"2025-11-24T09:47:40.637464Z","steps":["trace[167518157] 'process raft request'  (duration: 126.25887ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:47:40.637622Z","caller":"traceutil/trace.go:172","msg":"trace[1803810617] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"143.568056ms","start":"2025-11-24T09:47:40.494037Z","end":"2025-11-24T09:47:40.637605Z","steps":["trace[1803810617] 'process raft request'  (duration: 105.954011ms)","trace[1803810617] 'compare'  (duration: 35.876493ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:47:59 up 7 min,  0 users,  load average: 0.51, 0.42, 0.23
	Linux pause-377882 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ac6118758ef1f973c9564d1758679a3e59f11d42e92725395f447aeb4beae68a] <==
	I1124 09:47:37.329941       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:47:37.335351       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 09:47:37.336318       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:47:37.341421       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:37.344182       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:47:37.418501       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:37.428012       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:47:37.438869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:47:38.137646       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:47:38.143980       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:47:38.144081       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:47:39.230265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:47:39.306067       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:47:39.472892       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:47:39.486788       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.144]
	I1124 09:47:39.487985       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:47:39.496184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:47:40.341551       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:47:40.672435       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:47:40.711934       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:47:40.733997       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:47:45.989814       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:47:46.236451       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:46.245197       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:47:46.283852       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fc72a91df2fc4f6dc073c8014f1c240e687109834ed331987f4a4dbe97e94eb1] <==
	I1124 09:47:45.349010       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 09:47:45.354949       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 09:47:45.356796       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 09:47:45.356886       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 09:47:45.356917       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 09:47:45.356934       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 09:47:45.357326       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:47:45.363858       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:47:45.367832       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:47:45.371889       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:47:45.381808       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 09:47:45.381852       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 09:47:45.381866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 09:47:45.381820       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:47:45.381937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:47:45.382249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:47:45.382287       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 09:47:45.383839       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:47:45.384886       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:47:45.385008       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:47:45.385295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:47:45.385388       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:47:45.387239       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:47:45.389108       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:47:45.390179       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-377882" podCIDRs=["10.244.0.0/24"]
	
	
	==> kube-proxy [686ee6b9ed43a5893a797ee5d0adfc761f0dd86e33527ff3e80737dd1a910566] <==
	I1124 09:47:47.441082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:47:47.541264       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:47:47.546898       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.144"]
	E1124 09:47:47.547101       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:47:47.705554       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 09:47:47.705654       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 09:47:47.705688       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:47:47.756293       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:47:47.756887       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:47:47.756903       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:47:47.763210       1 config.go:200] "Starting service config controller"
	I1124 09:47:47.763902       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:47:47.764111       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:47:47.766582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:47:47.764127       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:47:47.766785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:47:47.775665       1 config.go:309] "Starting node config controller"
	I1124 09:47:47.778989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:47:47.779002       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:47:47.868293       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:47:47.868329       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:47:47.868357       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e7a2c5a8afdd30b03adb3d054c40967b42d8c9b5dcd005c36076f195b4d5bf77] <==
	E1124 09:47:37.385829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:47:37.385832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:47:37.386079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:47:37.386044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:47:37.385998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:47:37.386185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:47:37.386385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:47:38.191894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:47:38.197806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:47:38.294301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:47:38.350340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:47:38.451555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:47:38.465646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:47:38.486241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:47:38.554016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:47:38.636036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:47:38.666634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:47:38.673185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:47:38.748395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:47:38.789545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:47:38.791750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:47:38.808572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:47:38.809019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:47:38.811377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1124 09:47:41.048211       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.556177   13719 apiserver.go:52] "Watching apiserver"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.591966   13719 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.635866   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-377882" podStartSLOduration=1.6358439630000001 podStartE2EDuration="1.635843963s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.616678422 +0000 UTC m=+1.167262292" watchObservedRunningTime="2025-11-24 09:47:41.635843963 +0000 UTC m=+1.186427828"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.651598   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-377882" podStartSLOduration=1.6515836 podStartE2EDuration="1.6515836s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.638323086 +0000 UTC m=+1.188906968" watchObservedRunningTime="2025-11-24 09:47:41.6515836 +0000 UTC m=+1.202167471"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.667640   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-377882" podStartSLOduration=1.667622019 podStartE2EDuration="1.667622019s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.651950233 +0000 UTC m=+1.202534106" watchObservedRunningTime="2025-11-24 09:47:41.667622019 +0000 UTC m=+1.218205873"
	Nov 24 09:47:41 pause-377882 kubelet[13719]: I1124 09:47:41.667870   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-377882" podStartSLOduration=1.667860601 podStartE2EDuration="1.667860601s" podCreationTimestamp="2025-11-24 09:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.666119355 +0000 UTC m=+1.216703226" watchObservedRunningTime="2025-11-24 09:47:41.667860601 +0000 UTC m=+1.218444471"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426841   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-lib-modules\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426922   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-kube-proxy\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426960   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-xtables-lock\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.426986   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dgv\" (UniqueName: \"kubernetes.io/projected/2d8b2f63-dfd4-4493-a6dc-bbddad71f796-kube-api-access-79dgv\") pod \"kube-proxy-c42hb\" (UID: \"2d8b2f63-dfd4-4493-a6dc-bbddad71f796\") " pod="kube-system/kube-proxy-c42hb"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734266   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nzt5\" (UniqueName: \"kubernetes.io/projected/3ff2e529-3c1f-431e-9199-bb2c04dbe874-kube-api-access-7nzt5\") pod \"coredns-66bc5c9577-t7vnl\" (UID: \"3ff2e529-3c1f-431e-9199-bb2c04dbe874\") " pod="kube-system/coredns-66bc5c9577-t7vnl"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734775   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9349d8e4-be24-4e97-bb02-f38fa659efba-config-volume\") pod \"coredns-66bc5c9577-fzcps\" (UID: \"9349d8e4-be24-4e97-bb02-f38fa659efba\") " pod="kube-system/coredns-66bc5c9577-fzcps"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734831   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcr9\" (UniqueName: \"kubernetes.io/projected/9349d8e4-be24-4e97-bb02-f38fa659efba-kube-api-access-ktcr9\") pod \"coredns-66bc5c9577-fzcps\" (UID: \"9349d8e4-be24-4e97-bb02-f38fa659efba\") " pod="kube-system/coredns-66bc5c9577-fzcps"
	Nov 24 09:47:46 pause-377882 kubelet[13719]: I1124 09:47:46.734869   13719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ff2e529-3c1f-431e-9199-bb2c04dbe874-config-volume\") pod \"coredns-66bc5c9577-t7vnl\" (UID: \"3ff2e529-3c1f-431e-9199-bb2c04dbe874\") " pod="kube-system/coredns-66bc5c9577-t7vnl"
	Nov 24 09:47:48 pause-377882 kubelet[13719]: I1124 09:47:48.816355   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c42hb" podStartSLOduration=2.81633057 podStartE2EDuration="2.81633057s" podCreationTimestamp="2025-11-24 09:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:47.833599322 +0000 UTC m=+7.384183197" watchObservedRunningTime="2025-11-24 09:47:48.81633057 +0000 UTC m=+8.366914443"
	Nov 24 09:47:48 pause-377882 kubelet[13719]: I1124 09:47:48.849800   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t7vnl" podStartSLOduration=2.84978381 podStartE2EDuration="2.84978381s" podCreationTimestamp="2025-11-24 09:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:48.819221372 +0000 UTC m=+8.369805244" watchObservedRunningTime="2025-11-24 09:47:48.84978381 +0000 UTC m=+8.400367682"
	Nov 24 09:47:49 pause-377882 kubelet[13719]: I1124 09:47:49.805365   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:47:49 pause-377882 kubelet[13719]: I1124 09:47:49.805813   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: E1124 09:47:50.686171   13719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763977670685802619 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: E1124 09:47:50.686198   13719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763977670685802619 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: I1124 09:47:50.822680   13719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fzcps" podStartSLOduration=4.8226606929999996 podStartE2EDuration="4.822660693s" podCreationTimestamp="2025-11-24 09:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:48.851385297 +0000 UTC m=+8.401969170" watchObservedRunningTime="2025-11-24 09:47:50.822660693 +0000 UTC m=+10.373244563"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: I1124 09:47:50.968151   13719 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:47:50 pause-377882 kubelet[13719]: I1124 09:47:50.970224   13719 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:47:51 pause-377882 kubelet[13719]: I1124 09:47:51.706586   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 09:47:54 pause-377882 kubelet[13719]: I1124 09:47:54.125021   13719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-377882 -n pause-377882
helpers_test.go:269: (dbg) Run:  kubectl --context pause-377882 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (415.86s)
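The post-mortem above ends with helpers_test.go listing every pod whose phase is not Running (helpers_test.go:269). For anyone replaying that check by hand, here is a minimal Go sketch that shells out to the same kubectl invocation recorded in the log; the context name and selector are taken verbatim from the run above, and nothing beyond that is assumed.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same query the post-mortem helper runs: names of all pods not in the Running phase.
	out, err := exec.Command("kubectl", "--context", "pause-377882",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	if len(out) == 0 {
		fmt.Println("all pods are Running")
		return
	}
	fmt.Printf("pods not Running: %s\n", out)
}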

                                                
                                    

Test pass (375/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.41
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 10.18
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 12.61
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0.84
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.65
31 TestOffline 85.81
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 133.85
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 10.53
44 TestAddons/parallel/Registry 20.73
45 TestAddons/parallel/RegistryCreds 0.81
47 TestAddons/parallel/InspektorGadget 11.79
48 TestAddons/parallel/MetricsServer 6.21
50 TestAddons/parallel/CSI 36.75
51 TestAddons/parallel/Headlamp 20.14
52 TestAddons/parallel/CloudSpanner 6.63
53 TestAddons/parallel/LocalPath 55.62
54 TestAddons/parallel/NvidiaDevicePlugin 6.8
55 TestAddons/parallel/Yakd 11.95
57 TestAddons/StoppedEnableDisable 84.35
58 TestCertOptions 60.84
59 TestCertExpiration 295.46
61 TestForceSystemdFlag 53.23
62 TestForceSystemdEnv 51.31
67 TestErrorSpam/setup 39.42
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.65
70 TestErrorSpam/pause 1.49
71 TestErrorSpam/unpause 1.78
72 TestErrorSpam/stop 86.56
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.6
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 62.31
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 4.03
84 TestFunctional/serial/CacheCmd/cache/add_local 2.13
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 42.19
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.26
95 TestFunctional/serial/LogsFileCmd 1.3
96 TestFunctional/serial/InvalidService 4.34
98 TestFunctional/parallel/ConfigCmd 0.42
99 TestFunctional/parallel/DashboardCmd 13.24
100 TestFunctional/parallel/DryRun 0.22
101 TestFunctional/parallel/InternationalLanguage 0.11
102 TestFunctional/parallel/StatusCmd 0.99
106 TestFunctional/parallel/ServiceCmdConnect 21.55
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 44.72
110 TestFunctional/parallel/SSHCmd 0.31
111 TestFunctional/parallel/CpCmd 1.25
112 TestFunctional/parallel/MySQL 22.56
113 TestFunctional/parallel/FileSync 0.22
114 TestFunctional/parallel/CertSync 1.21
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
122 TestFunctional/parallel/License 0.33
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
126 TestFunctional/parallel/ServiceCmd/DeployApp 21.24
136 TestFunctional/parallel/ServiceCmd/List 0.4
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.4
140 TestFunctional/parallel/Version/short 0.07
141 TestFunctional/parallel/Version/components 0.68
142 TestFunctional/parallel/ServiceCmd/URL 0.31
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
144 TestFunctional/parallel/MountCmd/any-port 9.01
145 TestFunctional/parallel/ProfileCmd/profile_list 0.33
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
150 TestFunctional/parallel/ImageCommands/ImageBuild 6.4
151 TestFunctional/parallel/ImageCommands/Setup 1.78
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.81
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.03
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.83
160 TestFunctional/parallel/MountCmd/specific-port 1.53
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 87.29
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 56.42
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.46
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.79
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 39.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.27
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.25
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.42
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.4
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 25.9
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.24
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.64
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.58
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.3
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.11
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.23
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.17
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.49
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.2
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.3
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.31
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.35
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.95
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.21
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.21
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.22
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.25
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.31
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.42
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.63
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.18
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.18
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.18
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.18
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.54
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.84
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 4.08
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.45
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.84
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.6
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.58
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 3.04
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.52
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 231.04
262 TestMultiControlPlane/serial/DeployApp 7.14
263 TestMultiControlPlane/serial/PingHostFromPods 1.26
264 TestMultiControlPlane/serial/AddWorkerNode 44.43
265 TestMultiControlPlane/serial/NodeLabels 0.08
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
267 TestMultiControlPlane/serial/CopyFile 10.49
268 TestMultiControlPlane/serial/StopSecondaryNode 74.03
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.5
270 TestMultiControlPlane/serial/RestartSecondaryNode 35.59
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.69
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 349.18
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.06
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 243.34
276 TestMultiControlPlane/serial/RestartCluster 100.7
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
278 TestMultiControlPlane/serial/AddSecondaryNode 78.14
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
284 TestJSONOutput/start/Command 84.38
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.6
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.84
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 84.7
316 TestMountStart/serial/StartWithMountFirst 19.3
317 TestMountStart/serial/VerifyMountFirst 0.3
318 TestMountStart/serial/StartWithMountSecond 19.64
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.68
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.22
323 TestMountStart/serial/RestartStopped 18.74
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 129.41
328 TestMultiNode/serial/DeployApp2Nodes 6.08
329 TestMultiNode/serial/PingHostFrom2Pods 0.85
330 TestMultiNode/serial/AddNode 42.24
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 5.85
334 TestMultiNode/serial/StopNode 2.28
335 TestMultiNode/serial/StartAfterStop 41.38
336 TestMultiNode/serial/RestartKeepsNodes 294.62
337 TestMultiNode/serial/DeleteNode 2.66
338 TestMultiNode/serial/StopMultiNode 169.61
339 TestMultiNode/serial/RestartMultiNode 83.99
340 TestMultiNode/serial/ValidateNameConflict 39.75
347 TestScheduledStopUnix 109.55
351 TestRunningBinaryUpgrade 118.27
353 TestKubernetesUpgrade 181.98
365 TestStoppedBinaryUpgrade/Setup 2.72
366 TestStoppedBinaryUpgrade/Upgrade 151.7
371 TestNetworkPlugins/group/false 3.64
376 TestPause/serial/Start 76.42
378 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
379 TestNoKubernetes/serial/StartWithK8s 60.35
380 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
382 TestISOImage/Setup 33.17
383 TestNoKubernetes/serial/StartWithStopK8s 33.32
385 TestISOImage/Binaries/crictl 0.34
386 TestISOImage/Binaries/curl 0.18
387 TestISOImage/Binaries/docker 0.18
388 TestISOImage/Binaries/git 0.19
389 TestISOImage/Binaries/iptables 0.18
390 TestISOImage/Binaries/podman 0.18
391 TestISOImage/Binaries/rsync 0.19
392 TestISOImage/Binaries/socat 0.19
393 TestISOImage/Binaries/wget 0.18
394 TestISOImage/Binaries/VBoxControl 0.19
395 TestISOImage/Binaries/VBoxService 0.19
396 TestNoKubernetes/serial/Start 55.63
397 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
398 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
399 TestNoKubernetes/serial/ProfileList 1.72
400 TestNoKubernetes/serial/Stop 1.35
401 TestNoKubernetes/serial/StartNoArgs 32.85
402 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
404 TestStartStop/group/old-k8s-version/serial/FirstStart 91.07
406 TestStartStop/group/no-preload/serial/FirstStart 109.02
407 TestStartStop/group/old-k8s-version/serial/DeployApp 10.34
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
409 TestStartStop/group/old-k8s-version/serial/Stop 84.04
410 TestStartStop/group/no-preload/serial/DeployApp 10.34
411 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
412 TestStartStop/group/no-preload/serial/Stop 87.38
413 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
414 TestStartStop/group/old-k8s-version/serial/SecondStart 46.59
416 TestStartStop/group/embed-certs/serial/FirstStart 94.12
417 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
418 TestStartStop/group/no-preload/serial/SecondStart 65.61
419 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
420 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
421 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
422 TestStartStop/group/old-k8s-version/serial/Pause 2.99
424 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.76
426 TestStartStop/group/newest-cni/serial/FirstStart 75.61
427 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
428 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
429 TestStartStop/group/embed-certs/serial/DeployApp 12.35
430 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.16
431 TestStartStop/group/no-preload/serial/Pause 2.96
432 TestNetworkPlugins/group/auto/Start 85.84
433 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.44
434 TestStartStop/group/embed-certs/serial/Stop 87.53
435 TestStartStop/group/newest-cni/serial/DeployApp 0
436 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
437 TestStartStop/group/newest-cni/serial/Stop 8.08
438 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
439 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
440 TestStartStop/group/newest-cni/serial/SecondStart 41.22
441 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
442 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.34
443 TestNetworkPlugins/group/auto/KubeletFlags 0.17
444 TestNetworkPlugins/group/auto/NetCatPod 11.26
445 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
446 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
447 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.16
448 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/embed-certs/serial/SecondStart 48.67
450 TestStartStop/group/newest-cni/serial/Pause 3.21
451 TestNetworkPlugins/group/auto/DNS 0.19
452 TestNetworkPlugins/group/auto/Localhost 0.15
453 TestNetworkPlugins/group/auto/HairPin 0.16
454 TestNetworkPlugins/group/kindnet/Start 77.86
455 TestNetworkPlugins/group/calico/Start 99.72
456 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
457 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
458 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.68
459 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
460 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 1.13
461 TestStartStop/group/embed-certs/serial/Pause 4.42
462 TestNetworkPlugins/group/custom-flannel/Start 82.44
463 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
464 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
465 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
466 TestNetworkPlugins/group/kindnet/DNS 0.15
467 TestNetworkPlugins/group/kindnet/Localhost 0.12
468 TestNetworkPlugins/group/kindnet/HairPin 0.14
469 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.01
470 TestNetworkPlugins/group/calico/ControllerPod 6.01
471 TestNetworkPlugins/group/enable-default-cni/Start 87.26
472 TestNetworkPlugins/group/calico/KubeletFlags 0.19
473 TestNetworkPlugins/group/calico/NetCatPod 11.27
474 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
475 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.66
476 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.78
477 TestNetworkPlugins/group/calico/DNS 0.19
478 TestNetworkPlugins/group/calico/Localhost 0.16
479 TestNetworkPlugins/group/calico/HairPin 0.15
480 TestNetworkPlugins/group/flannel/Start 73.55
481 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
482 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
483 TestNetworkPlugins/group/bridge/Start 90.19
484 TestNetworkPlugins/group/custom-flannel/DNS 0.23
485 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
486 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
488 TestISOImage/PersistentMounts//data 0.18
489 TestISOImage/PersistentMounts//var/lib/docker 0.17
490 TestISOImage/PersistentMounts//var/lib/cni 0.18
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
492 TestISOImage/PersistentMounts//var/lib/minikube 0.2
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.19
495 TestISOImage/VersionJSON 0.18
496 TestISOImage/eBPFSupport 0.19
497 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
498 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
499 TestNetworkPlugins/group/flannel/ControllerPod 6.01
500 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
501 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
502 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
504 TestNetworkPlugins/group/flannel/NetCatPod 10.22
505 TestNetworkPlugins/group/flannel/DNS 0.17
506 TestNetworkPlugins/group/flannel/Localhost 0.15
507 TestNetworkPlugins/group/flannel/HairPin 0.13
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
509 TestNetworkPlugins/group/bridge/NetCatPod 11.23
510 TestNetworkPlugins/group/bridge/DNS 0.15
511 TestNetworkPlugins/group/bridge/Localhost 0.12
512 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (23.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-836257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-836257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.404995032s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.41s)
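For readers who want to replay this download-only start outside the test harness, the sketch below drives the same command from Go and decodes the JSON event stream line by line. The kubernetes-version, runtime, and driver flags are taken from the Run line above; the profile name and the printed fields are illustrative assumptions (minikube emits one JSON event object per stdout line when -o=json is set).

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same download-only start as the test above, one JSON event per stdout line.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-demo", // hypothetical profile name
		"--force", "--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=kvm2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some events carry long messages
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event
		}
		fmt.Println(ev["type"], ev["data"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}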

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 08:29:04.060263    9629 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 08:29:04.060358    9629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
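preload-exists only asserts that the tarball cached by the previous step is still on disk. Below is a minimal equivalent check in Go, assuming a default ~/.minikube layout rather than the jenkins MINIKUBE_HOME used in this run; the file name comes from the "Found local preload" line above.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors the cache path reported in the log above.
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("preload found:", tarball)
}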

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-836257
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-836257: exit status 85 (74.946683ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-836257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-836257 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:28:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:28:40.710204    9641 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:28:40.710453    9641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:40.710463    9641 out.go:374] Setting ErrFile to fd 2...
	I1124 08:28:40.710468    9641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:28:40.710674    9641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	W1124 08:28:40.710786    9641 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21978-5665/.minikube/config/config.json: open /home/jenkins/minikube-integration/21978-5665/.minikube/config/config.json: no such file or directory
	I1124 08:28:40.711264    9641 out.go:368] Setting JSON to true
	I1124 08:28:40.712234    9641 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":657,"bootTime":1763972264,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:28:40.712301    9641 start.go:143] virtualization: kvm guest
	I1124 08:28:40.716603    9641 out.go:99] [download-only-836257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:28:40.716755    9641 notify.go:221] Checking for updates...
	W1124 08:28:40.716776    9641 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 08:28:40.718170    9641 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:28:40.719581    9641 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:28:40.720790    9641 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:28:40.722011    9641 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:28:40.723503    9641 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:28:40.725987    9641 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:28:40.726273    9641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:28:41.243485    9641 out.go:99] Using the kvm2 driver based on user configuration
	I1124 08:28:41.243540    9641 start.go:309] selected driver: kvm2
	I1124 08:28:41.243546    9641 start.go:927] validating driver "kvm2" against <nil>
	I1124 08:28:41.243883    9641 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:28:41.244356    9641 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 08:28:41.244493    9641 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:28:41.244519    9641 cni.go:84] Creating CNI manager for ""
	I1124 08:28:41.244568    9641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 08:28:41.244576    9641 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 08:28:41.244615    9641 start.go:353] cluster config:
	{Name:download-only-836257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-836257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:28:41.244781    9641 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:28:41.246402    9641 out.go:99] Downloading VM boot image ...
	I1124 08:28:41.246435    9641 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1124 08:28:51.255009    9641 out.go:99] Starting "download-only-836257" primary control-plane node in "download-only-836257" cluster
	I1124 08:28:51.255056    9641 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 08:28:51.352113    9641 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1124 08:28:51.352144    9641 cache.go:65] Caching tarball of preloaded images
	I1124 08:28:51.352350    9641 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 08:28:51.354251    9641 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 08:28:51.354274    9641 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 08:28:51.452495    9641 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1124 08:28:51.452620    9641 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-836257 host does not exist
	  To start a cluster, run: "minikube start -p download-only-836257"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
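LogsDuration runs minikube logs against a profile whose VM was never created, and the harness treats the resulting exit status 85 as the expected outcome. A hedged Go sketch of the same check, with the profile name taken from the run above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-836257")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The run above saw exit status 85 for a profile that only ever downloaded artifacts.
		fmt.Printf("minikube logs exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube logs:", err)
	}
	fmt.Println(string(out))
}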

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-836257
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (10.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-025261 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-025261 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.177183086s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (10.18s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1124 08:29:14.626550    9629 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1124 08:29:14.626603    9629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-025261
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-025261: exit status 85 (75.392562ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-836257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-836257 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-836257                                                                                                                                                 │ download-only-836257 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-025261 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-025261 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:29:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:29:04.500236    9889 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:29:04.500908    9889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:04.500917    9889 out.go:374] Setting ErrFile to fd 2...
	I1124 08:29:04.500921    9889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:04.501098    9889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:29:04.501552    9889 out.go:368] Setting JSON to true
	I1124 08:29:04.502356    9889 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":681,"bootTime":1763972264,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:29:04.502413    9889 start.go:143] virtualization: kvm guest
	I1124 08:29:04.504257    9889 out.go:99] [download-only-025261] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:29:04.504444    9889 notify.go:221] Checking for updates...
	I1124 08:29:04.505849    9889 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:29:04.507251    9889 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:29:04.508719    9889 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:29:04.510173    9889 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:29:04.511565    9889 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:29:04.514493    9889 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:29:04.514740    9889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:29:04.549725    9889 out.go:99] Using the kvm2 driver based on user configuration
	I1124 08:29:04.549767    9889 start.go:309] selected driver: kvm2
	I1124 08:29:04.549773    9889 start.go:927] validating driver "kvm2" against <nil>
	I1124 08:29:04.550067    9889 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:29:04.550529    9889 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 08:29:04.550661    9889 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:29:04.550684    9889 cni.go:84] Creating CNI manager for ""
	I1124 08:29:04.550727    9889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 08:29:04.550736    9889 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 08:29:04.550776    9889 start.go:353] cluster config:
	{Name:download-only-025261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-025261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:04.550859    9889 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:04.552328    9889 out.go:99] Starting "download-only-025261" primary control-plane node in "download-only-025261" cluster
	I1124 08:29:04.552352    9889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:05.006474    9889 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 08:29:05.006529    9889 cache.go:65] Caching tarball of preloaded images
	I1124 08:29:05.006738    9889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:05.008822    9889 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1124 08:29:05.008848    9889 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 08:29:05.105064    9889 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1124 08:29:05.105108    9889 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1124 08:29:13.902536    9889 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 08:29:13.902893    9889 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/download-only-025261/config.json ...
	I1124 08:29:13.902924    9889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/download-only-025261/config.json: {Name:mkfbfed87047f12374ea91baf50af838820f267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:13.903067    9889 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 08:29:13.903253    9889 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/linux/amd64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-025261 host does not exist
	  To start a cluster, run: "minikube start -p download-only-025261"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
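The log above records both the preload tarball URL for v1.34.2 and the MD5 the GCS API returned for it. A minimal shell sketch of fetching and verifying the same tarball by hand (outside the test suite, assuming the v18 bucket layout shown in the log is unchanged):

  curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
  # compare against the checksum the GCS API reported in the log above
  echo "40ac2ac600e3e4b9dc7a3f8c6cb2ed91  preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4" | md5sum -c -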

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-025261
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (12.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-538093 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-538093 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.610038415s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (12.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
I1124 08:29:27.751028    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 08:29:28.031789    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 08:29:28.315450    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-538093
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-538093: exit status 85 (74.933066ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-836257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-836257 │ jenkins │ v1.37.0 │ 24 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-836257                                                                                                                                                        │ download-only-836257 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-025261 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-025261 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-025261                                                                                                                                                        │ download-only-025261 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-538093 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-538093 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:29:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:29:15.054116   10083 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:29:15.054221   10083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:15.054228   10083 out.go:374] Setting ErrFile to fd 2...
	I1124 08:29:15.054232   10083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:15.054407   10083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:29:15.054841   10083 out.go:368] Setting JSON to true
	I1124 08:29:15.055615   10083 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":691,"bootTime":1763972264,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:29:15.055668   10083 start.go:143] virtualization: kvm guest
	I1124 08:29:15.057710   10083 out.go:99] [download-only-538093] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:29:15.057842   10083 notify.go:221] Checking for updates...
	I1124 08:29:15.059075   10083 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:29:15.060742   10083 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:29:15.062150   10083 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:29:15.063443   10083 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:29:15.064823   10083 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:29:15.067115   10083 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:29:15.067347   10083 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:29:15.098196   10083 out.go:99] Using the kvm2 driver based on user configuration
	I1124 08:29:15.098232   10083 start.go:309] selected driver: kvm2
	I1124 08:29:15.098238   10083 start.go:927] validating driver "kvm2" against <nil>
	I1124 08:29:15.098568   10083 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:29:15.099142   10083 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 08:29:15.099340   10083 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:29:15.099374   10083 cni.go:84] Creating CNI manager for ""
	I1124 08:29:15.099432   10083 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 08:29:15.099444   10083 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 08:29:15.099518   10083 start.go:353] cluster config:
	{Name:download-only-538093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-538093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:15.099634   10083 iso.go:125] acquiring lock: {Name:mk18ecb32e798e36e9a21981d14605467064f612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:15.101155   10083 out.go:99] Starting "download-only-538093" primary control-plane node in "download-only-538093" cluster
	I1124 08:29:15.101191   10083 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1124 08:29:15.204901   10083 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1124 08:29:15.411720   10083 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1124 08:29:15.412090   10083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/download-only-538093/config.json ...
	I1124 08:29:15.412124   10083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/download-only-538093/config.json: {Name:mkf6446f0a3024e982fc8fa2c853938a934a23b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:15.412269   10083 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1124 08:29:15.412310   10083 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 08:29:15.412472   10083 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	I1124 08:29:15.412668   10083 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21978-5665/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1124 08:29:15.515220   10083 out.go:99] Another minikube instance is downloading dependencies... 
	I1124 08:29:24.133424   10083 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:24.133567   10083 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 08:29:24.443922   10083 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:24.728015   10083 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 08:29:25.013744   10083 cache.go:107] acquiring lock: {Name:mkd012b56d6bb314838e8477fa61cbc9a5cb6182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013768   10083 cache.go:107] acquiring lock: {Name:mkc9a0c6b55838e55cce5ad7bc53cddbd14b524c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013782   10083 cache.go:107] acquiring lock: {Name:mk873476b8b51c5ad30a5f207562c122a407baa7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013785   10083 cache.go:107] acquiring lock: {Name:mk25a8e984499d9056c7556923373a6a0424ac0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013819   10083 cache.go:107] acquiring lock: {Name:mk59e7d3324e6d5caf067ed3caccff0e089892d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013819   10083 cache.go:107] acquiring lock: {Name:mk7b9d9c6ed27d19c384d6cbe702bfd1c838c06e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013744   10083 cache.go:107] acquiring lock: {Name:mk843be7defe78f14bd5310432fc15bd3fb06fcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.013882   10083 cache.go:107] acquiring lock: {Name:mk8faa0d7d5001227c8e0f6859d07215668f8c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 08:29:25.014073   10083 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 08:29:25.014118   10083 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 08:29:25.014150   10083 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 08:29:25.014155   10083 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 08:29:25.014177   10083 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 08:29:25.014109   10083 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 08:29:25.014195   10083 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 08:29:25.014109   10083 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 08:29:25.015368   10083 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 08:29:25.015397   10083 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 08:29:25.015368   10083 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 08:29:25.015369   10083 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 08:29:25.015429   10083 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 08:29:25.015448   10083 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 08:29:25.015448   10083 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 08:29:25.015434   10083 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	
	
	* The control-plane node download-only-538093 host does not exist
	  To start a cluster, run: "minikube start -p download-only-538093"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)
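Both preload URLs in the log return 404 for v1.35.0-beta.0, which is why minikube falls back to downloading kubeadm, kubectl and kubelet individually. A quick sketch (not part of the test) that reproduces the probe against the same URL:

  curl -s -o /dev/null -w '%{http_code}\n' \
    https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
  # prints 404 while no preload has been published for this beta, matching the warnings above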

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-538093
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1124 08:29:29.413575    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-974725 --alsologtostderr --binary-mirror http://127.0.0.1:40623 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-974725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-974725
--- PASS: TestBinaryMirror (0.65s)
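TestBinaryMirror points --download-only at a local HTTP endpoint on port 40623. A hedged sketch of standing up such a mirror yourself; the release/<version>/bin/linux/amd64 layout mirroring dl.k8s.io, the ~/.minikube cache path, the python3 server and the binary-mirror-demo profile name are assumptions here, not taken from this report:

  mkdir -p mirror/release/v1.34.2/bin/linux/amd64
  cp ~/.minikube/cache/linux/amd64/v1.34.2/kubectl mirror/release/v1.34.2/bin/linux/amd64/
  # the mirror may also need the matching kubectl.sha256 next to the binary
  (cd mirror && python3 -m http.server 40623) &   # same port the test used
  out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:40623 --driver=kvm2 --container-runtime=crio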

                                                
                                    
x
+
TestOffline (85.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-776898 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-776898 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.899675969s)
helpers_test.go:175: Cleaning up "offline-crio-776898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-776898
--- PASS: TestOffline (85.81s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-076740
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-076740: exit status 85 (71.570302ms)

                                                
                                                
-- stdout --
	* Profile "addons-076740" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-076740"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-076740
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-076740: exit status 85 (70.495139ms)

                                                
                                                
-- stdout --
	* Profile "addons-076740" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-076740"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
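Both PreSetup tests above expect exit status 85 when the addon commands target a profile that does not exist yet. A one-line sketch to confirm the same behavior from a shell:

  out/minikube-linux-amd64 addons enable dashboard -p addons-076740; echo "exit=$?"
  # prints exit=85 while the addons-076740 profile has not been created, matching the output above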

                                                
                                    
x
+
TestAddons/Setup (133.85s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-076740 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-076740 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.852144095s)
--- PASS: TestAddons/Setup (133.85s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-076740 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-076740 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-076740 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-076740 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9deb25b0-61ba-41f6-85a6-166e22652eb7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9deb25b0-61ba-41f6-85a6-166e22652eb7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00421407s
addons_test.go:694: (dbg) Run:  kubectl --context addons-076740 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-076740 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-076740 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)
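The FakeCredentials test checks that the gcp-auth addon injects GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into new pods. A hedged sketch of the same check against an ad-hoc pod (the envcheck pod name is made up here; the image and env var names come from the log):

  kubectl --context addons-076740 run envcheck --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'
  kubectl --context addons-076740 logs envcheck   # once the pod has completed
  kubectl --context addons-076740 delete pod envcheck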

                                                
                                    
x
+
TestAddons/parallel/Registry (20.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.262047ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-bbszd" [f5bd4f8d-32c3-4226-9f47-38d07eaa1ddd] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004848033s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-24jnp" [d30eb954-1f06-43b1-98ff-136a4042942e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004475412s
addons_test.go:392: (dbg) Run:  kubectl --context addons-076740 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-076740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-076740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.926161742s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 ip
2025/11/24 08:32:23 [DEBUG] GET http://192.168.39.17:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.73s)
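The Registry test exercises both paths to the registry: cluster DNS from inside a pod, and the node IP on port 5000 from the host. A compact sketch of the same two probes, reusing the commands and endpoints from the log above:

  kubectl --context addons-076740 run --rm registry-check --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c 'wget --spider -S http://registry.kube-system.svc.cluster.local'
  curl -sI "http://$(out/minikube-linux-amd64 -p addons-076740 ip):5000"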

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.81s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 9.788321ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-076740
addons_test.go:332: (dbg) Run:  kubectl --context addons-076740 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bc77l" [a47a4467-2d19-488b-8c19-7f741aa2020b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004997751s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable inspektor-gadget --alsologtostderr -v=1: (5.78341565s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.283634ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vqcv6" [ac18afbb-29f5-4ebe-a88d-e67822460468] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006828582s
addons_test.go:463: (dbg) Run:  kubectl --context addons-076740 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable metrics-server --alsologtostderr -v=1: (1.121152607s)
--- PASS: TestAddons/parallel/MetricsServer (6.21s)

                                                
                                    
x
+
TestAddons/parallel/CSI (36.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 08:32:09.479192    9629 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 08:32:09.485186    9629 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 08:32:09.485212    9629 kapi.go:107] duration metric: took 6.059781ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.069895ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-076740 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-076740 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [be5e4f00-d4d5-4cb3-9be8-0665b94e237e] Pending
helpers_test.go:352: "task-pv-pod" [be5e4f00-d4d5-4cb3-9be8-0665b94e237e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [be5e4f00-d4d5-4cb3-9be8-0665b94e237e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.012031613s
addons_test.go:572: (dbg) Run:  kubectl --context addons-076740 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-076740 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-076740 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-076740 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-076740 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-076740 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-076740 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [78a38998-0c76-490b-8e8b-bd7ab85ae84b] Pending
helpers_test.go:352: "task-pv-pod-restore" [78a38998-0c76-490b-8e8b-bd7ab85ae84b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [78a38998-0c76-490b-8e8b-bd7ab85ae84b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003988184s
addons_test.go:614: (dbg) Run:  kubectl --context addons-076740 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-076740 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-076740 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.997982837s)
--- PASS: TestAddons/parallel/CSI (36.75s)
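The helpers above repeatedly poll the PVC phase and the VolumeSnapshot readiness field. The same polling can be reproduced with small shell loops (a sketch, assuming the hpvc and new-snapshot-demo resources from the test manifests still exist):

  until [ "$(kubectl --context addons-076740 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done
  until [ "$(kubectl --context addons-076740 get volumesnapshot new-snapshot-demo -n default -o jsonpath='{.status.readyToUse}')" = "true" ]; do sleep 2; done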

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-076740 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-tmjz9" [07d9a101-eb99-404d-89f9-ca4ed3af3c17] Pending
helpers_test.go:352: "headlamp-dfcdc64b-tmjz9" [07d9a101-eb99-404d-89f9-ca4ed3af3c17] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-tmjz9" [07d9a101-eb99-404d-89f9-ca4ed3af3c17] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00497831s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable headlamp --alsologtostderr -v=1: (6.276015124s)
--- PASS: TestAddons/parallel/Headlamp (20.14s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-n6vx7" [d411eaa2-f77c-408e-ab57-7e9808c99a6b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003735341s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.62s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-076740 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-076740 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d724733b-60b2-44d7-a27c-3939fc77f642] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d724733b-60b2-44d7-a27c-3939fc77f642] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d724733b-60b2-44d7-a27c-3939fc77f642] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003775548s
addons_test.go:967: (dbg) Run:  kubectl --context addons-076740 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 ssh "cat /opt/local-path-provisioner/pvc-600f5cd9-f262-49bb-b127-38831b9747e0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-076740 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-076740 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.837482468s)
--- PASS: TestAddons/parallel/LocalPath (55.62s)
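The LocalPath test reads a file from a host path derived from the bound PV name (pvc-<uid>_<namespace>_<claim>, as seen in the ssh command above). A hedged sketch of reading the same file without hard-coding the UID; the path pattern generalization is an assumption based on that one log line:

  PV=$(kubectl --context addons-076740 get pvc test-pvc -n default -o jsonpath='{.spec.volumeName}')
  out/minikube-linux-amd64 -p addons-076740 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"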

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-c78hg" [4920cb6b-84d5-44d0-96e9-8aac9b76c9e0] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.028535602s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zf86l" [b1d8325e-6409-45ff-9110-800a776ddf5b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.025558401s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-076740 addons disable yakd --alsologtostderr -v=1: (5.926959564s)
--- PASS: TestAddons/parallel/Yakd (11.95s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (84.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-076740
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-076740: (1m24.144824585s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-076740
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-076740
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-076740
--- PASS: TestAddons/StoppedEnableDisable (84.35s)

                                                
                                    
x
+
TestCertOptions (60.84s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-322176 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-322176 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.618029632s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-322176 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-322176 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-322176 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-322176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-322176
--- PASS: TestCertOptions (60.84s)
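TestCertOptions passes extra --apiserver-ips and --apiserver-names and then inspects the generated certificate. Filtering the SAN block out of the same openssl output is a quick way to eyeball them (a sketch reusing the ssh command from the log):

  out/minikube-linux-amd64 -p cert-options-322176 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'
  # expect 192.168.15.15 and www.google.com among the listed SANs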

                                                
                                    
x
+
TestCertExpiration (295.46s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-986811 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-986811 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.681606546s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-986811 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-986811 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (48.933188632s)
helpers_test.go:175: Cleaning up "cert-expiration-986811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-986811
--- PASS: TestCertExpiration (295.46s)
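TestCertExpiration first issues 3-minute certificates and then restarts the same profile with --cert-expiration=8760h. The resulting expiry date can be checked directly with openssl (a sketch; the certificate path is assumed to be the same one TestCertOptions inspects above, and the profile is deleted at the end of the test):

  out/minikube-linux-amd64 -p cert-expiration-986811 ssh \
    "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"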

                                                
                                    
x
+
TestForceSystemdFlag (53.23s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-703366 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1124 09:41:00.040469    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-703366 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.258833333s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-703366 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-703366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-703366
--- PASS: TestForceSystemdFlag (53.23s)
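The --force-systemd run is verified by reading CRI-O's drop-in config over ssh. Filtering that file for the cgroup manager setting makes the effect visible (a sketch reusing the ssh command above; the cgroup_manager key name is CRI-O's own config key, not something printed in this log):

  out/minikube-linux-amd64 -p force-systemd-flag-703366 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
  # expect cgroup_manager = "systemd" when --force-systemd is in effect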

                                                
                                    
x
+
TestForceSystemdEnv (51.31s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-371303 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1124 09:41:55.507583    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-371303 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.43315781s)
helpers_test.go:175: Cleaning up "force-systemd-env-371303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-371303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-371303: (1.875993263s)
--- PASS: TestForceSystemdEnv (51.31s)
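TestForceSystemdEnv drives the same behavior through the environment rather than the flag. Based on the test name, this is presumably the MINIKUBE_FORCE_SYSTEMD variable; that variable name and the force-systemd-env-demo profile are assumptions here, since the log does not print them:

  MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-demo \
    --memory=3072 --driver=kvm2 --container-runtime=crio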

                                                
                                    
x
+
TestErrorSpam/setup (39.42s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-575434 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-575434 --driver=kvm2  --container-runtime=crio
E1124 08:36:44.612763    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:44.619189    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:44.630605    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:44.652036    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:44.693450    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:44.774940    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:44.936577    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:45.258296    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:45.900103    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:47.182449    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:49.744228    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:36:54.867467    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-575434 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-575434 --driver=kvm2  --container-runtime=crio: (39.42477947s)
--- PASS: TestErrorSpam/setup (39.42s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
x
+
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 pause
E1124 08:37:05.109854    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
x
+
TestErrorSpam/stop (86.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 stop
E1124 08:37:25.591811    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:38:06.554550    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 stop: (1m23.042720911s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 stop: (1.819988421s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-575434 --log_dir /tmp/nospam-575434 stop: (1.696925259s)
--- PASS: TestErrorSpam/stop (86.56s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/test/nested/copy/9629/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843072 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1124 08:39:28.478665    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-843072 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.594832521s)
--- PASS: TestFunctional/serial/StartWithProxy (80.60s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (62.31s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 08:39:55.109076    9629 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843072 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-843072 --alsologtostderr -v=8: (1m2.304246682s)
functional_test.go:678: soft start took 1m2.305052181s for "functional-843072" cluster.
I1124 08:40:57.413755    9629 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (62.31s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-843072 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 cache add registry.k8s.io/pause:3.1: (1.344002235s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 cache add registry.k8s.io/pause:3.3: (1.437271443s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 cache add registry.k8s.io/pause:latest: (1.253238306s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)
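A by-hand sketch of the cache workflow exercised above (not part of the test output; minikube stands in for the out/minikube-linux-amd64 binary under test, and the final grep is an added verification step):
    $ minikube -p functional-843072 cache add registry.k8s.io/pause:3.1
    $ minikube -p functional-843072 cache add registry.k8s.io/pause:3.3
    $ minikube -p functional-843072 cache add registry.k8s.io/pause:latest
    $ minikube -p functional-843072 ssh sudo crictl images | grep pause   # the cached images should now be in the node's CRI-O store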

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-843072 /tmp/TestFunctionalserialCacheCmdcacheadd_local2539176845/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cache add minikube-local-cache-test:functional-843072
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 cache add minikube-local-cache-test:functional-843072: (1.765955677s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cache delete minikube-local-cache-test:functional-843072
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-843072
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (171.041715ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)
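The reload cycle above, issued by hand (a sketch, not from the test run; the exit-code comments reflect the output recorded above):
    $ minikube -p functional-843072 ssh sudo crictl rmi registry.k8s.io/pause:latest
    $ minikube -p functional-843072 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present on the node
    $ minikube -p functional-843072 cache reload                                            # pushes cached images back into the node
    $ minikube -p functional-843072 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again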

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 kubectl -- --context functional-843072 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-843072 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.19s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 08:41:44.614462    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-843072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.193423304s)
functional_test.go:776: restart took 42.193537179s for "functional-843072" cluster.
I1124 08:41:48.080862    9629 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (42.19s)
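For reference, a minimal sketch of passing a component flag through --extra-config and checking that the control plane comes back (not from the test run; the kubectl check is the one the ComponentHealth test below performs):
    $ minikube start -p functional-843072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    $ kubectl --context functional-843072 get po -l tier=control-plane -n kube-system -o=json   # all control-plane pods should report Ready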

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-843072 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 logs: (1.25543907s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 logs --file /tmp/TestFunctionalserialLogsFileCmd1727219307/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 logs --file /tmp/TestFunctionalserialLogsFileCmd1727219307/001/logs.txt: (1.29931301s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-843072 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-843072
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-843072: exit status 115 (227.424691ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.118:32688 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-843072 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 config get cpus: exit status 14 (58.562322ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 config get cpus: exit status 14 (68.619151ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
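The config set/get/unset round trip above can be reproduced directly; a sketch (not part of the test output, exit-code comments mirror the recorded run):
    $ minikube -p functional-843072 config get cpus       # exit 14: specified key could not be found in config
    $ minikube -p functional-843072 config set cpus 2
    $ minikube -p functional-843072 config get cpus       # prints 2, exit 0
    $ minikube -p functional-843072 config unset cpus
    $ minikube -p functional-843072 config get cpus       # exit 14 again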

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843072 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843072 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 16514: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.24s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-843072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (110.012621ms)

                                                
                                                
-- stdout --
	* [functional-843072] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:42:19.827316   16430 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:42:19.827559   16430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:42:19.827568   16430 out.go:374] Setting ErrFile to fd 2...
	I1124 08:42:19.827574   16430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:42:19.827790   16430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:42:19.828228   16430 out.go:368] Setting JSON to false
	I1124 08:42:19.829093   16430 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1476,"bootTime":1763972264,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:42:19.829151   16430 start.go:143] virtualization: kvm guest
	I1124 08:42:19.831100   16430 out.go:179] * [functional-843072] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:42:19.832261   16430 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:42:19.832267   16430 notify.go:221] Checking for updates...
	I1124 08:42:19.834425   16430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:42:19.835653   16430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:42:19.836873   16430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:42:19.837973   16430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:42:19.839181   16430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:42:19.840743   16430 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:42:19.841197   16430 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:42:19.872195   16430 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 08:42:19.873481   16430 start.go:309] selected driver: kvm2
	I1124 08:42:19.873496   16430 start.go:927] validating driver "kvm2" against &{Name:functional-843072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-843072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:42:19.873608   16430 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:42:19.875484   16430 out.go:203] 
	W1124 08:42:19.876613   16430 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:42:19.877723   16430 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843072 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.22s)
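A sketch of the dry-run validation shown above (not from the test run): --dry-run checks the requested settings against the existing profile without touching the VM, and the exit codes below are the ones recorded in this block.
    $ minikube start -p functional-843072 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY, 250MiB is below the 1800MB usable minimum
    $ minikube start -p functional-843072 --dry-run --driver=kvm2 --container-runtime=crio
    # exit 0: the existing profile validates when no undersized memory override is given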

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-843072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (109.756594ms)

                                                
                                                
-- stdout --
	* [functional-843072] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:42:20.044887   16462 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:42:20.044988   16462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:42:20.044992   16462 out.go:374] Setting ErrFile to fd 2...
	I1124 08:42:20.044997   16462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:42:20.045283   16462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:42:20.045676   16462 out.go:368] Setting JSON to false
	I1124 08:42:20.046495   16462 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1476,"bootTime":1763972264,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:42:20.046546   16462 start.go:143] virtualization: kvm guest
	I1124 08:42:20.048290   16462 out.go:179] * [functional-843072] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 08:42:20.049529   16462 notify.go:221] Checking for updates...
	I1124 08:42:20.049601   16462 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:42:20.050917   16462 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:42:20.052110   16462 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:42:20.053253   16462 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:42:20.054402   16462 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:42:20.055580   16462 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:42:20.057038   16462 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 08:42:20.057484   16462 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:42:20.088689   16462 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1124 08:42:20.090856   16462 start.go:309] selected driver: kvm2
	I1124 08:42:20.090874   16462 start.go:927] validating driver "kvm2" against &{Name:functional-843072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-843072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:42:20.090982   16462 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:42:20.092978   16462 out.go:203] 
	W1124 08:42:20.094348   16462 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 08:42:20.095786   16462 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
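The three status output modes exercised above, as a by-hand sketch (not part of the test output; the Go template is quoted for the shell and its labels such as host and kubelet are arbitrary text):
    $ minikube -p functional-843072 status
    $ minikube -p functional-843072 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    $ minikube -p functional-843072 status -o json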

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (21.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-843072 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-843072 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2rd5v" [7d60f44a-cdad-4dd7-b5d9-7a277582081a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2rd5v" [7d60f44a-cdad-4dd7-b5d9-7a277582081a] Running
E1124 08:42:12.320552    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.015125772s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.118:30446
functional_test.go:1680: http://192.168.39.118:30446: success! body:
Request served by hello-node-connect-7d85dfc575-2rd5v

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.118:30446
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.55s)
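A minimal sketch of the NodePort round trip above (not from the test run; the NodePort value differs per run, and the curl step is an added verification):
    $ kubectl --context functional-843072 create deployment hello-node-connect --image kicbase/echo-server
    $ kubectl --context functional-843072 expose deployment hello-node-connect --type=NodePort --port=8080
    $ URL=$(minikube -p functional-843072 service hello-node-connect --url)   # e.g. http://192.168.39.118:30446
    $ curl "$URL"                                                             # echo-server reports the request it served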

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c9678f57-9ab4-4413-95c0-48256874e86f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005141487s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-843072 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-843072 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-843072 get pvc myclaim -o=json
I1124 08:42:02.670966    9629 retry.go:31] will retry after 1.99802886s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a17b036d-72fd-47a1-b81e-4adab27ce84d ResourceVersion:824 Generation:0 CreationTimestamp:2025-11-24 08:42:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00172afb0 VolumeMode:0xc00172afc0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-843072 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-843072 apply -f testdata/storage-provisioner/pod.yaml
I1124 08:42:04.932773    9629 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [19363408-e495-4a1e-b626-2d802a7e4f05] Pending
helpers_test.go:352: "sp-pod" [19363408-e495-4a1e-b626-2d802a7e4f05] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [19363408-e495-4a1e-b626-2d802a7e4f05] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004887974s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-843072 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-843072 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-843072 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ab5b8960-2a1b-4a53-bd94-61b6865bb307] Pending
helpers_test.go:352: "sp-pod" [ab5b8960-2a1b-4a53-bd94-61b6865bb307] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ab5b8960-2a1b-4a53-bd94-61b6865bb307] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004736878s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-843072 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.72s)
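The persistence check above, run by hand (a sketch, not part of the test output; the testdata paths are relative to the minikube test tree, and the PVC must reach phase Bound before the pod is created):
    $ kubectl --context functional-843072 apply -f testdata/storage-provisioner/pvc.yaml
    $ kubectl --context functional-843072 get pvc myclaim -o=json                  # repeat until status.phase is Bound
    $ kubectl --context functional-843072 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-843072 exec sp-pod -- touch /tmp/mount/foo
    $ kubectl --context functional-843072 delete -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-843072 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-843072 exec sp-pod -- ls /tmp/mount             # foo survives the pod being recreated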

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh -n functional-843072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cp functional-843072:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3242246925/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh -n functional-843072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh -n functional-843072 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
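Copying a file into the node, back out, and to a not-yet-existing in-node path, as exercised above (a sketch, not from the test run):
    $ minikube -p functional-843072 cp testdata/cp-test.txt /home/docker/cp-test.txt
    $ minikube -p functional-843072 ssh -n functional-843072 "sudo cat /home/docker/cp-test.txt"
    $ minikube -p functional-843072 cp functional-843072:/home/docker/cp-test.txt /tmp/cp-test.txt
    $ minikube -p functional-843072 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt   # per the run above, the target path is created on the node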

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-843072 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-2gxnm" [4dfd5e83-d4c6-4652-bff3-9c267f476b73] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-2gxnm" [4dfd5e83-d4c6-4652-bff3-9c267f476b73] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004397081s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843072 exec mysql-5bb876957f-2gxnm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843072 exec mysql-5bb876957f-2gxnm -- mysql -ppassword -e "show databases;": exit status 1 (397.71155ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:42:14.909010    9629 retry.go:31] will retry after 523.172066ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843072 exec mysql-5bb876957f-2gxnm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843072 exec mysql-5bb876957f-2gxnm -- mysql -ppassword -e "show databases;": exit status 1 (126.800244ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:42:15.559406    9629 retry.go:31] will retry after 2.028079471s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843072 exec mysql-5bb876957f-2gxnm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.56s)
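The mysql client only succeeds once mysqld inside the pod accepts connections, which is why the run above retries. A sketch of the same flow by hand (not from the test; the kubectl wait step, the until loop, and the deploy/mysql exec target are additions for illustration):
    $ kubectl --context functional-843072 replace --force -f testdata/mysql.yaml
    $ kubectl --context functional-843072 wait --for=condition=ready pod -l app=mysql --timeout=10m
    $ until kubectl --context functional-843072 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do sleep 2; done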

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9629/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /etc/test/nested/copy/9629/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
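For context, files placed under the host's .minikube/files directory (rooted at MINIKUBE_HOME in this job, ~/.minikube by default) are copied to the matching path inside the VM when the profile starts, which is where /etc/test/nested/copy/9629/hosts comes from. A rough sketch, assuming the default location:
    $ mkdir -p ~/.minikube/files/etc/test/nested/copy/9629
    $ echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/9629/hosts
    $ minikube start -p functional-843072 --driver=kvm2 --container-runtime=crio   # the sync happens during start
    $ minikube -p functional-843072 ssh "sudo cat /etc/test/nested/copy/9629/hosts"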

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9629.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /etc/ssl/certs/9629.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9629.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /usr/share/ca-certificates/9629.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/96292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /etc/ssl/certs/96292.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/96292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /usr/share/ca-certificates/96292.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-843072 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
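
The NodeLabels command above leans on kubectl's go-template output: the template walks the first node's metadata.labels map and prints each key. As a standalone illustration of how that template evaluates, the small Go program below applies the identical template string to a hand-written stand-in for `kubectl get nodes -o json` (the sample JSON and its label values are invented for the example):

package main

import (
    "encoding/json"
    "os"
    "text/template"
)

func main() {
    // Stand-in for `kubectl get nodes -o json`; only the fields the template
    // touches are present, and the label values are illustrative.
    raw := `{"items":[{"metadata":{"labels":{"kubernetes.io/hostname":"functional-843072","minikube.k8s.io/name":"functional-843072"}}}]}`

    var nodes interface{}
    if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
        panic(err)
    }

    // The exact template string passed to kubectl in the test above.
    const tmplText = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
    tmpl := template.Must(template.New("labels").Parse(tmplText))

    // Prints each label key followed by a space, e.g.
    // "kubernetes.io/hostname minikube.k8s.io/name "
    if err := tmpl.Execute(os.Stdout, nodes); err != nil {
        panic(err)
    }
}

text/template ranges over string-keyed maps in sorted key order, so the printed label keys come out deterministically.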

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh "sudo systemctl is-active docker": exit status 1 (217.810628ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh "sudo systemctl is-active containerd": exit status 1 (243.89161ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
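
On a crio cluster the NonActiveRuntimeDisabled check wants `systemctl is-active docker` and `containerd` to fail: exit status 3 with "inactive" on stdout is the passing outcome, so the two Non-zero exit entries above are expected rather than errors. A rough Go version of that assertion, reusing the exact ssh command from the log (binary path and profile name are this run's):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// runtimeInactive reports whether the named runtime is inactive inside the
// guest, mirroring the systemctl probe above.
func runtimeInactive(runtime string) (bool, error) {
    cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-843072",
        "ssh", "sudo systemctl is-active "+runtime)
    out, err := cmd.Output() // stdout is captured even when the exit code is non-zero
    state := strings.TrimSpace(string(out))
    if err == nil {
        // Exit 0 means the unit is active, which is the failure case here.
        return false, fmt.Errorf("%s unexpectedly active: %q", runtime, state)
    }
    // systemctl exits 3 for inactive units; accept that plus the "inactive" text.
    if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 3 && state == "inactive" {
        return true, nil
    }
    return false, err
}

func main() {
    for _, r := range []string{"docker", "containerd"} {
        ok, err := runtimeInactive(r)
        fmt.Println(r, ok, err)
    }
}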

                                                
                                    
x
+
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (21.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-843072 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-843072 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fcf44" [d7fc165d-dba7-4e72-910e-0acd6b69909d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-fcf44" [d7fc165d-dba7-4e72-910e-0acd6b69909d] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.004807877s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.24s)
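
ServiceCmd/DeployApp is a plain create/expose/wait sequence: create the hello-node deployment from kicbase/echo-server, expose it as a NodePort on 8080, then poll until a pod labelled app=hello-node is up. A sketch of the same flow in Go, assuming kubectl is on PATH and checking only the pod phase rather than the full readiness conditions the harness inspects:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// kubectl runs a kubectl command against the context used throughout this report.
func kubectl(args ...string) (string, error) {
    full := append([]string{"--context", "functional-843072"}, args...)
    out, err := exec.Command("kubectl", full...).CombinedOutput()
    return string(out), err
}

func main() {
    // Same two commands as the test above.
    if _, err := kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server"); err != nil {
        panic(err)
    }
    if _, err := kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"); err != nil {
        panic(err)
    }
    // Poll the pod phase until it reports Running or we give up.
    deadline := time.Now().Add(10 * time.Minute)
    for time.Now().Before(deadline) {
        out, _ := kubectl("get", "pods", "-l", "app=hello-node",
            "-o", "jsonpath={.items[*].status.phase}")
        if strings.Contains(out, "Running") {
            fmt.Println("hello-node is running")
            return
        }
        time.Sleep(5 * time.Second)
    }
    panic("timed out waiting for app=hello-node")
}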

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 service list -o json
functional_test.go:1504: Took "421.497901ms" to run "out/minikube-linux-amd64 -p functional-843072 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.118:30161
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.118:30161
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdany-port3988919969/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763973738609932675" to /tmp/TestFunctionalparallelMountCmdany-port3988919969/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763973738609932675" to /tmp/TestFunctionalparallelMountCmdany-port3988919969/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763973738609932675" to /tmp/TestFunctionalparallelMountCmdany-port3988919969/001/test-1763973738609932675
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.139598ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:42:18.841429    9629 retry.go:31] will retry after 294.984474ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 08:42 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 08:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 08:42 test-1763973738609932675
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh cat /mount-9p/test-1763973738609932675
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-843072 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [7c5b8768-b1ac-455b-a1a9-152a7d1a208c] Pending
helpers_test.go:352: "busybox-mount" [7c5b8768-b1ac-455b-a1a9-152a7d1a208c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [7c5b8768-b1ac-455b-a1a9-152a7d1a208c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [7c5b8768-b1ac-455b-a1a9-152a7d1a208c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005335545s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-843072 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdany-port3988919969/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.01s)
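
The any-port mount test follows the usual pattern: launch `minikube mount host:guest` as a long-running process, then retry `findmnt -T /mount-9p` over ssh until the 9p mount becomes visible (the first findmnt failure above is just the mount daemon not being up yet, which is why retry.go kicks in). A sketch of that start-and-wait pattern, with the host path taken from this run:

package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // cancelling the context tears the mount process down again

    // Long-running mount process, as in the test above (the host path is run-specific).
    mount := exec.CommandContext(ctx, "out/minikube-linux-amd64", "mount",
        "-p", "functional-843072",
        "/tmp/TestFunctionalparallelMountCmdany-port3988919969/001:/mount-9p")
    if err := mount.Start(); err != nil {
        panic(err)
    }

    // Retry findmnt until the 9p mount shows up, mirroring the retry behaviour above.
    for i := 0; i < 20; i++ {
        probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-843072",
            "ssh", "findmnt -T /mount-9p | grep 9p")
        if out, err := probe.CombinedOutput(); err == nil {
            fmt.Printf("mounted: %s", out)
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    panic("mount never became visible in the guest")
}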

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "262.786964ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.156267ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843072 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-843072
localhost/kicbase/echo-server:functional-843072
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843072 image ls --format short --alsologtostderr:
I1124 08:42:28.964369   16897 out.go:360] Setting OutFile to fd 1 ...
I1124 08:42:28.964604   16897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:28.964612   16897 out.go:374] Setting ErrFile to fd 2...
I1124 08:42:28.964617   16897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:28.964833   16897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:42:28.965355   16897 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:28.965449   16897 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:28.967446   16897 ssh_runner.go:195] Run: systemctl --version
I1124 08:42:28.970284   16897 main.go:143] libmachine: domain functional-843072 has defined MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:28.970704   16897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:e6:52", ip: ""} in network mk-functional-843072: {Iface:virbr1 ExpiryTime:2025-11-24 09:38:49 +0000 UTC Type:0 Mac:52:54:00:86:e6:52 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-843072 Clientid:01:52:54:00:86:e6:52}
I1124 08:42:28.970729   16897 main.go:143] libmachine: domain functional-843072 has defined IP address 192.168.39.118 and MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:28.970874   16897 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-843072/id_rsa Username:docker}
I1124 08:42:29.069288   16897 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
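
The --alsologtostderr trace makes the mechanism visible: `image ls` opens an ssh session into the guest and runs `sudo crictl images --output json`, then formats the result. Below is a sketch that issues the same query directly and prints just the tags; the JSON field names used for decoding are an assumption based on typical crictl output, not something this log shows:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// Assumed shape of `crictl images --output json`; only the fields used here.
type crictlImages struct {
    Images []struct {
        ID       string   `json:"id"`
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

func main() {
    // Same in-guest command the trace above shows minikube running.
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-843072",
        "ssh", "sudo crictl images --output json").Output()
    if err != nil {
        panic(err)
    }
    var imgs crictlImages
    if err := json.Unmarshal(out, &imgs); err != nil {
        panic(err)
    }
    for _, img := range imgs.Images {
        for _, tag := range img.RepoTags {
            fmt.Println(tag) // roughly the "image ls --format short" view
        }
    }
}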

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843072 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-843072  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-843072  │ cf6e95494defb │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843072 image ls --format table --alsologtostderr:
I1124 08:42:30.988819   17031 out.go:360] Setting OutFile to fd 1 ...
I1124 08:42:30.989074   17031 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:30.989085   17031 out.go:374] Setting ErrFile to fd 2...
I1124 08:42:30.989091   17031 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:30.989329   17031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:42:30.989891   17031 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:30.990006   17031 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:30.991908   17031 ssh_runner.go:195] Run: systemctl --version
I1124 08:42:30.993763   17031 main.go:143] libmachine: domain functional-843072 has defined MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:30.994134   17031 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:e6:52", ip: ""} in network mk-functional-843072: {Iface:virbr1 ExpiryTime:2025-11-24 09:38:49 +0000 UTC Type:0 Mac:52:54:00:86:e6:52 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-843072 Clientid:01:52:54:00:86:e6:52}
I1124 08:42:30.994202   17031 main.go:143] libmachine: domain functional-843072 has defined IP address 192.168.39.118 and MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:30.994357   17031 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-843072/id_rsa Username:docker}
I1124 08:42:31.087842   17031 ssh_runner.go:195] Run: sudo crictl images --output json
2025/11/24 08:42:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843072 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-843072"],"size":"4945146"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf1400
4181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k
8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a
8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha25
6:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7
416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"cf6e95494defba470d49399296c398c88b8f58f324779ef4a886e73214342087","repoDigests":["localhost/minikube-local-cache-test@sha256:3981e404750131ea0333ea6279aed8ab965aedc415cac16b8456aed6737bc6bb"],"repoTags":["localhost/minikube-local-cache-test:functional-843072"],"size":"3330"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d2
1560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843072 image ls --format json --alsologtostderr:
I1124 08:42:30.781184   17021 out.go:360] Setting OutFile to fd 1 ...
I1124 08:42:30.781267   17021 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:30.781271   17021 out.go:374] Setting ErrFile to fd 2...
I1124 08:42:30.781275   17021 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:30.781498   17021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:42:30.782049   17021 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:30.782179   17021 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:30.784432   17021 ssh_runner.go:195] Run: systemctl --version
I1124 08:42:30.786683   17021 main.go:143] libmachine: domain functional-843072 has defined MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:30.787036   17021 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:e6:52", ip: ""} in network mk-functional-843072: {Iface:virbr1 ExpiryTime:2025-11-24 09:38:49 +0000 UTC Type:0 Mac:52:54:00:86:e6:52 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-843072 Clientid:01:52:54:00:86:e6:52}
I1124 08:42:30.787060   17021 main.go:143] libmachine: domain functional-843072 has defined IP address 192.168.39.118 and MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:30.787204   17021 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-843072/id_rsa Username:docker}
I1124 08:42:30.874821   17021 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843072 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-843072
size: "4945146"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cf6e95494defba470d49399296c398c88b8f58f324779ef4a886e73214342087
repoDigests:
- localhost/minikube-local-cache-test@sha256:3981e404750131ea0333ea6279aed8ab965aedc415cac16b8456aed6737bc6bb
repoTags:
- localhost/minikube-local-cache-test:functional-843072
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843072 image ls --format yaml --alsologtostderr:
I1124 08:42:29.210635   16908 out.go:360] Setting OutFile to fd 1 ...
I1124 08:42:29.210810   16908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:29.210867   16908 out.go:374] Setting ErrFile to fd 2...
I1124 08:42:29.210874   16908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:29.211202   16908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:42:29.211912   16908 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:29.212053   16908 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:29.218069   16908 ssh_runner.go:195] Run: systemctl --version
I1124 08:42:29.225220   16908 main.go:143] libmachine: domain functional-843072 has defined MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:29.226461   16908 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:e6:52", ip: ""} in network mk-functional-843072: {Iface:virbr1 ExpiryTime:2025-11-24 09:38:49 +0000 UTC Type:0 Mac:52:54:00:86:e6:52 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-843072 Clientid:01:52:54:00:86:e6:52}
I1124 08:42:29.226662   16908 main.go:143] libmachine: domain functional-843072 has defined IP address 192.168.39.118 and MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:29.226940   16908 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-843072/id_rsa Username:docker}
I1124 08:42:29.350683   16908 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh pgrep buildkitd: exit status 1 (204.590997ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image build -t localhost/my-image:functional-843072 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 image build -t localhost/my-image:functional-843072 testdata/build --alsologtostderr: (5.991923399s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843072 image build -t localhost/my-image:functional-843072 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ed8af6a9686
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-843072
--> 099968dcf81
Successfully tagged localhost/my-image:functional-843072
099968dcf8118aed68bcc46b308cf954974a40cff4986722c767335d47a8f898
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843072 image build -t localhost/my-image:functional-843072 testdata/build --alsologtostderr:
I1124 08:42:29.689351   16968 out.go:360] Setting OutFile to fd 1 ...
I1124 08:42:29.689673   16968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:29.689684   16968 out.go:374] Setting ErrFile to fd 2...
I1124 08:42:29.689688   16968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:42:29.689878   16968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:42:29.690431   16968 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:29.690962   16968 config.go:182] Loaded profile config "functional-843072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 08:42:29.693305   16968 ssh_runner.go:195] Run: systemctl --version
I1124 08:42:29.696046   16968 main.go:143] libmachine: domain functional-843072 has defined MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:29.696586   16968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:e6:52", ip: ""} in network mk-functional-843072: {Iface:virbr1 ExpiryTime:2025-11-24 09:38:49 +0000 UTC Type:0 Mac:52:54:00:86:e6:52 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-843072 Clientid:01:52:54:00:86:e6:52}
I1124 08:42:29.696615   16968 main.go:143] libmachine: domain functional-843072 has defined IP address 192.168.39.118 and MAC address 52:54:00:86:e6:52 in network mk-functional-843072
I1124 08:42:29.696761   16968 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-843072/id_rsa Username:docker}
I1124 08:42:29.796067   16968 build_images.go:162] Building image from path: /tmp/build.3203921652.tar
I1124 08:42:29.796139   16968 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 08:42:29.828672   16968 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3203921652.tar
I1124 08:42:29.844632   16968 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3203921652.tar: stat -c "%s %y" /var/lib/minikube/build/build.3203921652.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3203921652.tar': No such file or directory
I1124 08:42:29.844674   16968 ssh_runner.go:362] scp /tmp/build.3203921652.tar --> /var/lib/minikube/build/build.3203921652.tar (3072 bytes)
I1124 08:42:29.906183   16968 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3203921652
I1124 08:42:29.926794   16968 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3203921652 -xf /var/lib/minikube/build/build.3203921652.tar
I1124 08:42:29.942912   16968 crio.go:315] Building image: /var/lib/minikube/build/build.3203921652
I1124 08:42:29.943011   16968 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-843072 /var/lib/minikube/build/build.3203921652 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 08:42:35.592675   16968 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-843072 /var/lib/minikube/build/build.3203921652 --cgroup-manager=cgroupfs: (5.649638616s)
I1124 08:42:35.592740   16968 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3203921652
I1124 08:42:35.607145   16968 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3203921652.tar
I1124 08:42:35.620627   16968 build_images.go:218] Built localhost/my-image:functional-843072 from /tmp/build.3203921652.tar
I1124 08:42:35.620668   16968 build_images.go:134] succeeded building to: functional-843072
I1124 08:42:35.620673   16968 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.40s)
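
The build trace spells out how `image build` works on a crio cluster: the client tars the local build context, copies the tar to /var/lib/minikube/build/ in the guest, untars it there, and runs `sudo podman build ... --cgroup-manager=cgroupfs`. From the outside it is just the two commands the test runs; a minimal sketch with the same image name and context directory:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Build in-cluster, exactly as the test does; testdata/build holds the Dockerfile.
    build := exec.Command("out/minikube-linux-amd64", "-p", "functional-843072",
        "image", "build", "-t", "localhost/my-image:functional-843072", "testdata/build")
    if out, err := build.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("build failed: %v\n%s", err, out))
    }

    // Then confirm the new tag is visible to the runtime, as the follow-up image ls does.
    ls, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-843072",
        "image", "ls").Output()
    if err != nil {
        panic(err)
    }
    if !strings.Contains(string(ls), "localhost/my-image:functional-843072") {
        panic("built image not listed")
    }
    fmt.Println("image built and listed")
}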

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.761149132s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-843072
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "247.218219ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.438979ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image load --daemon kicbase/echo-server:functional-843072 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-843072 image load --daemon kicbase/echo-server:functional-843072 --alsologtostderr: (1.262945628s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image load --daemon kicbase/echo-server:functional-843072 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-843072
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image load --daemon kicbase/echo-server:functional-843072 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image save kicbase/echo-server:functional-843072 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image rm kicbase/echo-server:functional-843072 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls
I1124 08:42:25.966929    9629 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-843072
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 image save --daemon kicbase/echo-server:functional-843072 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-843072
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.83s)
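
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above walk one image through a full round trip: save the cluster's copy to a tar on the host, remove it from the cluster, load it back from the tar, and finally export it into the host docker daemon, where it shows up under the localhost/ prefix (per the docker image inspect line above). A compact sketch chaining those same commands:

package main

import (
    "fmt"
    "os/exec"
)

func run(name string, args ...string) {
    if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
        panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
    }
}

func main() {
    mk := "out/minikube-linux-amd64"
    img := "kicbase/echo-server:functional-843072"
    // Tar path from this run; any writable host path would do.
    tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"

    run(mk, "-p", "functional-843072", "image", "save", img, tar)        // cluster -> tar
    run(mk, "-p", "functional-843072", "image", "rm", img)               // drop it from the cluster
    run(mk, "-p", "functional-843072", "image", "load", tar)             // tar -> cluster
    run(mk, "-p", "functional-843072", "image", "save", "--daemon", img) // cluster -> host docker
    run("docker", "image", "inspect", "localhost/"+img)                  // reappears as localhost/<tag>
    fmt.Println("round trip complete")
}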

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdspecific-port303728507/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.638638ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:42:27.815616    9629 retry.go:31] will retry after 566.552746ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdspecific-port303728507/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh "sudo umount -f /mount-9p": exit status 1 (178.896979ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-843072 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdspecific-port303728507/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)
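Note: this test mounts a host directory into the guest over 9p on a fixed port and verifies it with findmnt. A rough by-hand equivalent, assuming /tmp/data exists on the host; the mount command blocks, so it is backgrounded here and cleaned up afterwards:

  minikube mount -p functional-843072 /tmp/data:/mount-9p --port 46464 &
  # the mount takes a moment to appear; the test retries this check
  minikube -p functional-843072 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-843072 ssh -- ls -la /mount-9p
  # stop the background mount process, then force-unmount anything left over
  kill %1
  minikube -p functional-843072 ssh "sudo umount -f /mount-9p"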

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1962488019/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1962488019/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1962488019/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T" /mount1: exit status 1 (243.343241ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:42:29.394876    9629 retry.go:31] will retry after 691.318836ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843072 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-843072 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1962488019/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1962488019/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1962488019/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)
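Note: this test starts several mounts of the same host directory and then relies on a single kill switch to tear them all down, which is why the stop steps above find no surviving parent processes. A sketch of that cleanup path, assuming /tmp/data on the host:

  minikube mount -p functional-843072 /tmp/data:/mount1 &
  minikube mount -p functional-843072 /tmp/data:/mount2 &
  minikube mount -p functional-843072 /tmp/data:/mount3 &
  minikube -p functional-843072 ssh "findmnt -T /mount1"
  # kill every mount process belonging to this profile in one go
  minikube mount -p functional-843072 --kill=true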

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-843072
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-843072
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-843072
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-5665/.minikube/files/etc/test/nested/copy/9629/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (87.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014740 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-014740 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m27.288664454s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (87.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (56.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1124 08:44:09.419212    9629 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014740 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-014740 --alsologtostderr -v=8: (56.415922823s)
functional_test.go:678: soft start took 56.416291129s for "functional-014740" cluster.
I1124 08:45:05.835533    9629 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (56.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-014740 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 cache add registry.k8s.io/pause:3.1: (1.093971224s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 cache add registry.k8s.io/pause:3.3: (1.222289529s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 cache add registry.k8s.io/pause:latest: (1.14320825s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2581022837/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cache add minikube-local-cache-test:functional-014740
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 cache add minikube-local-cache-test:functional-014740: (1.761563316s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cache delete minikube-local-cache-test:functional-014740
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-014740
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.05s)
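Note: the two cache subtests above pull remote images into minikube's local cache and also cache a locally built image. A condensed sketch of the same workflow, assuming Docker on the host and a hypothetical build context ./image-ctx containing a Dockerfile:

  # cache remote images so later starts do not re-pull them
  minikube -p functional-014740 cache add registry.k8s.io/pause:3.1
  minikube -p functional-014740 cache add registry.k8s.io/pause:latest
  # build an image on the host, then add it to the cache by name
  docker build -t minikube-local-cache-test:functional-014740 ./image-ctx
  minikube -p functional-014740 cache add minikube-local-cache-test:functional-014740
  minikube cache list
  minikube -p functional-014740 cache delete minikube-local-cache-test:functional-014740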

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (166.249962ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 cache reload: (1.23418377s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.79s)
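Note: cache reload re-pushes cached images into the node after they have been removed from the container runtime, which is exactly the failure-then-recovery sequence logged above. A sketch, assuming the pause image is already in the cache (see the cache add tests earlier):

  # delete the image inside the node; crictl inspecti now fails
  minikube -p functional-014740 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-014740 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # restore every cached image into the node, then verify again
  minikube -p functional-014740 cache reload
  minikube -p functional-014740 ssh sudo crictl inspecti registry.k8s.io/pause:latest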

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 kubectl -- --context functional-014740 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-014740 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (39.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014740 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-014740 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.117806498s)
functional_test.go:776: restart took 39.117909825s for "functional-014740" cluster.
I1124 08:45:53.027550    9629 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (39.12s)
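Note: the restart above passes a component flag through --extra-config, and the setting is persisted in the profile (it reappears in the validated config dumps later in this log). A minimal sketch of the same restart:

  # restart the existing profile, enabling an extra admission plugin on the apiserver
  minikube start -p functional-014740 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all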

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-014740 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
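Note: the health check above fetches the control-plane pods as JSON and asserts their phase and Ready status in Go. A hedged equivalent using kubectl's jsonpath output instead of the test's own parsing:

  # print name, phase, and Ready condition for each control-plane pod
  kubectl --context functional-014740 get po -n kube-system -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'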

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 logs: (1.270083894s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs364706785/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs364706785/001/logs.txt: (1.251575661s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.25s)
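Note: both log subtests use the same collector; the second simply redirects it to a file. Sketch, with an illustrative output path:

  minikube -p functional-014740 logs
  # write the same output to a file instead of stdout
  minikube -p functional-014740 logs --file /tmp/functional-014740-logs.txt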

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-014740 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-014740
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-014740: exit status 115 (219.209102ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.85:32451 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-014740 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-014740 delete -f testdata/invalidsvc.yaml: (1.01786107s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.42s)
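Note: the exit status 115 / SVC_UNREACHABLE above is the expected result of asking minikube for a service URL when no running pod backs the service. The contents of testdata/invalidsvc.yaml are not shown in this log; one hypothetical way to reproduce the same condition is a NodePort service whose selector matches nothing:

  # creates a Service with selector app=invalid-svc and no matching pods
  kubectl --context functional-014740 create service nodeport invalid-svc --tcp=80:80
  minikube -p functional-014740 service invalid-svc    # exits 115 with SVC_UNREACHABLE
  kubectl --context functional-014740 delete service invalid-svc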

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 config get cpus: exit status 14 (67.983401ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 config get cpus: exit status 14 (58.735794ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.40s)
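Note: config get exits 14 when a key is unset, which is what the two non-zero exits above assert. Sketch of the set/get/unset cycle:

  minikube -p functional-014740 config get cpus    # exit 14: key not set
  minikube -p functional-014740 config set cpus 2
  minikube -p functional-014740 config get cpus    # prints 2
  minikube -p functional-014740 config unset cpus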

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (25.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-014740 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-014740 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 19567: os: process already finished
E1124 08:46:44.613220    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.507049    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.513426    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.524830    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.546243    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.587726    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.669228    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:55.830661    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:56.152363    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:56.794019    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:46:58.076054    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:47:00.637493    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:47:05.759574    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:47:16.000885    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:47:36.482842    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:48:17.444571    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:49:39.366271    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:44.612654    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:51:55.507587    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (25.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014740 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-014740 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (126.693181ms)

                                                
                                                
-- stdout --
	* [functional-014740] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:46:11.480543   19362 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:46:11.480833   19362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.480847   19362 out.go:374] Setting ErrFile to fd 2...
	I1124 08:46:11.480853   19362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.481181   19362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:46:11.481756   19362 out.go:368] Setting JSON to false
	I1124 08:46:11.482893   19362 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1707,"bootTime":1763972264,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:46:11.482971   19362 start.go:143] virtualization: kvm guest
	I1124 08:46:11.485623   19362 out.go:179] * [functional-014740] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:46:11.486884   19362 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:46:11.486883   19362 notify.go:221] Checking for updates...
	I1124 08:46:11.489837   19362 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:46:11.491250   19362 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:46:11.492699   19362 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:46:11.494006   19362 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:46:11.495096   19362 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:46:11.496853   19362 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:46:11.497546   19362 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:46:11.531014   19362 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 08:46:11.532216   19362 start.go:309] selected driver: kvm2
	I1124 08:46:11.532233   19362 start.go:927] validating driver "kvm2" against &{Name:functional-014740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-014740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:46:11.532348   19362 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:46:11.534605   19362 out.go:203] 
	W1124 08:46:11.535712   19362 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:46:11.536976   19362 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014740 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)
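Note: --dry-run validates flags against the existing profile without touching the VM; here an undersized --memory makes it exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the plain dry run succeeds. Sketch:

  # rejected: 250MB is below the usable minimum reported above
  minikube start -p functional-014740 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # accepted: same profile, no resource override
  minikube start -p functional-014740 --dry-run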

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014740 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-014740 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (119.084213ms)

                                                
                                                
-- stdout --
	* [functional-014740] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:46:11.714620   19393 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:46:11.714773   19393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.714785   19393 out.go:374] Setting ErrFile to fd 2...
	I1124 08:46:11.714792   19393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:11.715248   19393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 08:46:11.715862   19393 out.go:368] Setting JSON to false
	I1124 08:46:11.717025   19393 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1708,"bootTime":1763972264,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:46:11.717101   19393 start.go:143] virtualization: kvm guest
	I1124 08:46:11.719049   19393 out.go:179] * [functional-014740] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 08:46:11.720406   19393 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:46:11.720406   19393 notify.go:221] Checking for updates...
	I1124 08:46:11.721700   19393 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:46:11.723058   19393 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 08:46:11.724430   19393 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 08:46:11.725668   19393 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:46:11.727020   19393 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:46:11.728756   19393 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 08:46:11.729435   19393 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:46:11.761441   19393 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1124 08:46:11.762608   19393 start.go:309] selected driver: kvm2
	I1124 08:46:11.762626   19393 start.go:927] validating driver "kvm2" against &{Name:functional-014740 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-014740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.85 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:46:11.762766   19393 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:46:11.764721   19393 out.go:203] 
	W1124 08:46:11.766067   19393 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 08:46:11.767221   19393 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)
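Note: the French output above comes from minikube's localized message catalog; how the test selects the language is not shown in this log, but selection is presumably driven by the host locale environment. A sketch under that assumption:

  # assumed: the locale variables pick the translation
  LC_ALL=fr minikube start -p functional-014740 --dry-run --memory 250MB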

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.64s)
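Note: status output can be shaped with a Go template (-f) or emitted as JSON (-o json); the template fields match the ones the test queries above. Sketch:

  minikube -p functional-014740 status
  # Go-template formatting of selected fields
  minikube -p functional-014740 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
  # machine-readable form
  minikube -p functional-014740 status -o json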

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-014740 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-014740 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-2f89s" [b617478e-9821-4355-a955-f4a6ffbf53b1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-2f89s" [b617478e-9821-4355-a955-f4a6ffbf53b1] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.007877421s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.85:31626
functional_test.go:1680: http://192.168.39.85:31626: success! body:
Request served by hello-node-connect-9f67c86d4-2f89s

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.85:31626
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.58s)
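Note: the connectivity check above is a plain NodePort round trip. A sketch of the same steps, reusing the echo-server image from this run; the wait and curl steps stand in for the test's pod polling and HTTP check:

  kubectl --context functional-014740 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-014740 expose deployment hello-node-connect --type=NodePort --port=8080
  # wait for the pod, then resolve the node URL and hit it
  kubectl --context functional-014740 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
  URL=$(minikube -p functional-014740 service hello-node-connect --url)
  curl -s "$URL"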

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh -n functional-014740 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cp functional-014740:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1099325095/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh -n functional-014740 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh -n functional-014740 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.11s)
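Note: minikube cp copies in both directions and creates missing destination directories, which is what the three copy/cat pairs above verify. Sketch with illustrative local paths:

  # host -> node
  minikube -p functional-014740 cp ./cp-test.txt /home/docker/cp-test.txt
  # node -> host (note the profile-name prefix on the source)
  minikube -p functional-014740 cp functional-014740:/home/docker/cp-test.txt ./cp-test-copy.txt
  # host -> node into a directory that does not exist yet
  minikube -p functional-014740 cp ./cp-test.txt /tmp/does/not/exist/cp-test.txt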

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9629/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/test/nested/copy/9629/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.23s)
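Note: the file checked here was staged on the host before the cluster started; anything under the minikube home's files/ tree is copied into the node at the same path relative to /. A sketch, assuming the default MINIKUBE_HOME of ~/.minikube and the path used by this test:

  # stage a file on the host...
  mkdir -p ~/.minikube/files/etc/test/nested/copy/9629
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/9629/hosts
  # ...start (or restart) the profile, then the file appears inside the node
  minikube start -p functional-014740
  minikube -p functional-014740 ssh "sudo cat /etc/test/nested/copy/9629/hosts"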

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9629.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/ssl/certs/9629.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9629.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /usr/share/ca-certificates/9629.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/96292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/ssl/certs/96292.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/96292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /usr/share/ca-certificates/96292.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.17s)
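
Note: the same PEM is expected under three names; the hashed filename (51391683.0 here) follows the OpenSSL subject-hash lookup convention. Assuming the certificate was supplied via $MINIKUBE_HOME/certs, a manual spot check mirrors the test:

    out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/ssl/certs/9629.pem"
    out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /usr/share/ca-certificates/9629.pem"
    out/minikube-linux-amd64 -p functional-014740 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hashed lookup name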

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-014740 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)
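
Note: the label check is a single go-template query against the first node object; empty output (no labels) would fail it:

    kubectl --context functional-014740 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'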

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh "sudo systemctl is-active docker": exit status 1 (260.362196ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh "sudo systemctl is-active containerd": exit status 1 (232.22166ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.49s)
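
Note: the two non-zero exits above are the expected outcome, not failures: systemctl is-active exits non-zero (3 here) when a unit is inactive, and on a crio profile both docker and containerd should be inactive:

    out/minikube-linux-amd64 -p functional-014740 ssh "sudo systemctl is-active docker"       # prints "inactive", exit 3
    out/minikube-linux-amd64 -p functional-014740 ssh "sudo systemctl is-active containerd"   # prints "inactive", exit 3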

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-014740 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-014740 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-pn7vs" [eca6c3f4-81b7-46d0-ac96-127a71d45d64] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-pn7vs" [eca6c3f4-81b7-46d0-ac96-127a71d45d64] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.00357907s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.20s)
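
Note: the deployment is a stock kubectl sequence; a condensed manual equivalent (the final get is only a convenience for watching readiness):

    kubectl --context functional-014740 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-014740 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-014740 get pods -l app=hello-node   # wait for Running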

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "248.723951ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.222763ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "285.330766ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.003528ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)
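
Note: the timings above (~250-285 ms vs ~60 ms) suggest the -l/--light variants skip the per-profile status probe; the four list flavours used across this group, side by side:

    out/minikube-linux-amd64 profile list                   # table with live status
    out/minikube-linux-amd64 profile list -l                # faster, status not probed
    out/minikube-linux-amd64 profile list -o json           # machine-readable
    out/minikube-linux-amd64 profile list -o json --light   # machine-readable, status not probed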

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3465841280/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763973963116202591" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3465841280/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763973963116202591" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3465841280/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763973963116202591" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3465841280/001/test-1763973963116202591
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (180.893593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:46:03.297377    9629 retry.go:31] will retry after 312.640108ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 08:46 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 08:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 08:46 test-1763973963116202591
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh cat /mount-9p/test-1763973963116202591
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-014740 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [58be346b-a3c6-494c-864a-b5b43f398892] Pending
helpers_test.go:352: "busybox-mount" [58be346b-a3c6-494c-864a-b5b43f398892] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [58be346b-a3c6-494c-864a-b5b43f398892] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [58be346b-a3c6-494c-864a-b5b43f398892] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007010719s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-014740 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3465841280/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.95s)
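
Note: the 9p mount needs a long-running host process; a two-terminal sketch (the /tmp/somedir path is illustrative):

    # terminal 1: keep the mount daemon in the foreground
    out/minikube-linux-amd64 mount -p functional-014740 /tmp/somedir:/mount-9p --alsologtostderr -v=1
    # terminal 2: confirm the guest sees a 9p filesystem at the mount point and can list it
    out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-014740 ssh -- ls -la /mount-9p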

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 service list -o json
functional_test.go:1504: Took "205.219161ms" to run "out/minikube-linux-amd64 -p functional-014740 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.85:31249
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.85:31249
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.31s)
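
Note: the HTTP and HTTPS lookups both resolve to the same NodePort (31249) on the node IP 192.168.39.85; the service subcommands exercised across this group:

    out/minikube-linux-amd64 -p functional-014740 service list -o json             # all services, machine-readable
    out/minikube-linux-amd64 -p functional-014740 service hello-node --url         # e.g. http://192.168.39.85:31249
    out/minikube-linux-amd64 -p functional-014740 service --https --url hello-node
    out/minikube-linux-amd64 -p functional-014740 service hello-node --url --format={{.IP}}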

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo671544991/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.085949ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:46:11.272734    9629 retry.go:31] will retry after 584.599034ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo671544991/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh "sudo umount -f /mount-9p": exit status 1 (213.349914ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-014740 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo671544991/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.63s)
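
Note: the failed "sudo umount -f /mount-9p" near the end is tolerated by the test: the mount process had already been stopped, so the guest correctly reports "not mounted". The only difference from the any-port case is pinning the 9p server to a fixed host port (path again illustrative):

    out/minikube-linux-amd64 mount -p functional-014740 /tmp/somedir:/mount-9p --port 46464 --alsologtostderr -v=1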

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014740 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/etcd:3.5.24-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-014740
localhost/kicbase/echo-server:functional-014740
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014740 image ls --format short --alsologtostderr:
I1124 08:46:24.135255   19925 out.go:360] Setting OutFile to fd 1 ...
I1124 08:46:24.135493   19925 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:24.135505   19925 out.go:374] Setting ErrFile to fd 2...
I1124 08:46:24.135510   19925 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:24.135688   19925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:46:24.136175   19925 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:24.136271   19925 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:24.138495   19925 ssh_runner.go:195] Run: systemctl --version
I1124 08:46:24.140890   19925 main.go:143] libmachine: domain functional-014740 has defined MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:24.141312   19925 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:7e:4b", ip: ""} in network mk-functional-014740: {Iface:virbr1 ExpiryTime:2025-11-24 09:42:57 +0000 UTC Type:0 Mac:52:54:00:84:7e:4b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:functional-014740 Clientid:01:52:54:00:84:7e:4b}
I1124 08:46:24.141337   19925 main.go:143] libmachine: domain functional-014740 has defined IP address 192.168.39.85 and MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:24.141494   19925 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-014740/id_rsa Username:docker}
I1124 08:46:24.220442   19925 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.18s)
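
Note: judging by the stderr blocks, all four list formats are backed by the same "sudo crictl images --output json" call on the guest and differ only in client-side rendering:

    out/minikube-linux-amd64 -p functional-014740 image ls --format short   # repo:tag lines only
    out/minikube-linux-amd64 -p functional-014740 image ls --format table   # adds image ID and size
    out/minikube-linux-amd64 -p functional-014740 image ls --format json
    out/minikube-linux-amd64 -p functional-014740 image ls --format yaml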

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014740 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1               │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ latest            │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc      │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-014740 │ cf6e95494defb │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 52.7MB │
│ localhost/my-image                      │ functional-014740 │ bd528124354af │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.3               │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest            │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.5.24-0          │ 8cb12dd0c3e42 │ 66.2MB │
│ registry.k8s.io/pause                   │ 3.10.1            │ cd073f4c5f6a8 │ 740kB  │
│ docker.io/kicbase/echo-server           │ latest            │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-014740 │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ latest            │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ aa9d02839d8de │ 90.8MB │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014740 image ls --format table --alsologtostderr:
I1124 08:46:28.199599   20007 out.go:360] Setting OutFile to fd 1 ...
I1124 08:46:28.199828   20007 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:28.199837   20007 out.go:374] Setting ErrFile to fd 2...
I1124 08:46:28.199841   20007 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:28.200030   20007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:46:28.200535   20007 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:28.200627   20007 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:28.202586   20007 ssh_runner.go:195] Run: systemctl --version
I1124 08:46:28.204431   20007 main.go:143] libmachine: domain functional-014740 has defined MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:28.204870   20007 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:7e:4b", ip: ""} in network mk-functional-014740: {Iface:virbr1 ExpiryTime:2025-11-24 09:42:57 +0000 UTC Type:0 Mac:52:54:00:84:7e:4b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:functional-014740 Clientid:01:52:54:00:84:7e:4b}
I1124 08:46:28.204895   20007 main.go:143] libmachine: domain functional-014740 has defined IP address 192.168.39.85 and MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:28.205069   20007 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-014740/id_rsa Username:docker}
I1124 08:46:28.283644   20007 ssh_runner.go:195] Run: sudo crictl images --output json
2025/11/24 08:46:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014740 image ls --format json --alsologtostderr:
[{"id":"bd528124354af07f76b98298a13253364996f3a1968721081540e882bcbcc6a6","repoDigests":["localhost/my-image@sha256:514f1fadfcdc2f899d40233cdef2bc2e53a7fd161ff7979b5ce5b0439971e2c3"],"repoTags":["localhost/my-image:functional-014740"],"size":"1468600"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d","repoDigests":["registry.k8s.io/etcd@sha256:2935cfa4bfce2fda1de6c218e1716ad170a9af6140906390d62cc3c2f4f542cd"],"repoTags":["registry.k8s.io/etcd:3.5.24-0"],"size":"66163668"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controlle
r-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"aefbdd86850e862337336cd34dd9cda8d4e8c207b95429f4c11204e5e77cd5eb","repoDigests":["docker.io/library/9e85d8281b618eacf219d9eda8b597fc0b849902b3c0359ad70adeff78ce4b07-tmp@sha256:25f28ee55714ac402aca509974f998ae17d95340010b3a8c7633f83b8fa24c6e"],"repoTags":[],"size":"1466018"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoD
igests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"cf6e95494defba470d49399296c398c88b8f58f324779ef4a886e73214342087","repoDigests":["localhost/minikube-local-cache-test@sha256:3981e404750131ea0333ea6279aed8ab965aedc415cac16b8456aed6737bc6bb"],"repoTags":["localhost/minikube-local-cache-test:functional-014740"],"size":"3330"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"527
44336"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["re
gistry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25
ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d6128552504
8f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-014740"],"size":"4945146"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014740 image ls --format json --alsologtostderr:
I1124 08:46:28.023505   19996 out.go:360] Setting OutFile to fd 1 ...
I1124 08:46:28.023771   19996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:28.023781   19996 out.go:374] Setting ErrFile to fd 2...
I1124 08:46:28.023787   19996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:28.023968   19996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:46:28.024511   19996 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:28.024622   19996 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:28.026554   19996 ssh_runner.go:195] Run: systemctl --version
I1124 08:46:28.028756   19996 main.go:143] libmachine: domain functional-014740 has defined MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:28.029122   19996 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:7e:4b", ip: ""} in network mk-functional-014740: {Iface:virbr1 ExpiryTime:2025-11-24 09:42:57 +0000 UTC Type:0 Mac:52:54:00:84:7e:4b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:functional-014740 Clientid:01:52:54:00:84:7e:4b}
I1124 08:46:28.029154   19996 main.go:143] libmachine: domain functional-014740 has defined IP address 192.168.39.85 and MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:28.029293   19996 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-014740/id_rsa Username:docker}
I1124 08:46:28.106807   19996 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014740 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: cf6e95494defba470d49399296c398c88b8f58f324779ef4a886e73214342087
repoDigests:
- localhost/minikube-local-cache-test@sha256:3981e404750131ea0333ea6279aed8ab965aedc415cac16b8456aed6737bc6bb
repoTags:
- localhost/minikube-local-cache-test:functional-014740
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-014740
size: "4945146"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d
repoDigests:
- registry.k8s.io/etcd@sha256:2935cfa4bfce2fda1de6c218e1716ad170a9af6140906390d62cc3c2f4f542cd
repoTags:
- registry.k8s.io/etcd:3.5.24-0
size: "66163668"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014740 image ls --format yaml --alsologtostderr:
I1124 08:46:24.310426   19936 out.go:360] Setting OutFile to fd 1 ...
I1124 08:46:24.310680   19936 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:24.310690   19936 out.go:374] Setting ErrFile to fd 2...
I1124 08:46:24.310694   19936 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:24.310918   19936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:46:24.311473   19936 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:24.311578   19936 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:24.313541   19936 ssh_runner.go:195] Run: systemctl --version
I1124 08:46:24.315514   19936 main.go:143] libmachine: domain functional-014740 has defined MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:24.315845   19936 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:7e:4b", ip: ""} in network mk-functional-014740: {Iface:virbr1 ExpiryTime:2025-11-24 09:42:57 +0000 UTC Type:0 Mac:52:54:00:84:7e:4b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:functional-014740 Clientid:01:52:54:00:84:7e:4b}
I1124 08:46:24.315869   19936 main.go:143] libmachine: domain functional-014740 has defined IP address 192.168.39.85 and MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:24.316001   19936 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-014740/id_rsa Username:docker}
I1124 08:46:24.393888   19936 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh pgrep buildkitd: exit status 1 (147.163046ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr: (3.20744827s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> aefbdd86850
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-014740
--> bd528124354
Successfully tagged localhost/my-image:functional-014740
bd528124354af07f76b98298a13253364996f3a1968721081540e882bcbcc6a6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr:
I1124 08:46:24.635848   19958 out.go:360] Setting OutFile to fd 1 ...
I1124 08:46:24.635945   19958 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:24.635953   19958 out.go:374] Setting ErrFile to fd 2...
I1124 08:46:24.635957   19958 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:46:24.636114   19958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
I1124 08:46:24.636692   19958 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:24.637320   19958 config.go:182] Loaded profile config "functional-014740": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 08:46:24.639293   19958 ssh_runner.go:195] Run: systemctl --version
I1124 08:46:24.641341   19958 main.go:143] libmachine: domain functional-014740 has defined MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:24.641716   19958 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:7e:4b", ip: ""} in network mk-functional-014740: {Iface:virbr1 ExpiryTime:2025-11-24 09:42:57 +0000 UTC Type:0 Mac:52:54:00:84:7e:4b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:functional-014740 Clientid:01:52:54:00:84:7e:4b}
I1124 08:46:24.641740   19958 main.go:143] libmachine: domain functional-014740 has defined IP address 192.168.39.85 and MAC address 52:54:00:84:7e:4b in network mk-functional-014740
I1124 08:46:24.641873   19958 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/functional-014740/id_rsa Username:docker}
I1124 08:46:24.720905   19958 build_images.go:162] Building image from path: /tmp/build.4044268825.tar
I1124 08:46:24.720988   19958 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 08:46:24.733341   19958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4044268825.tar
I1124 08:46:24.738010   19958 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4044268825.tar: stat -c "%s %y" /var/lib/minikube/build/build.4044268825.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4044268825.tar': No such file or directory
I1124 08:46:24.738041   19958 ssh_runner.go:362] scp /tmp/build.4044268825.tar --> /var/lib/minikube/build/build.4044268825.tar (3072 bytes)
I1124 08:46:24.769416   19958 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4044268825
I1124 08:46:24.781055   19958 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4044268825 -xf /var/lib/minikube/build/build.4044268825.tar
I1124 08:46:24.792731   19958 crio.go:315] Building image: /var/lib/minikube/build/build.4044268825
I1124 08:46:24.792827   19958 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-014740 /var/lib/minikube/build/build.4044268825 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 08:46:27.757368   19958 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-014740 /var/lib/minikube/build/build.4044268825 --cgroup-manager=cgroupfs: (2.964516988s)
I1124 08:46:27.757441   19958 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4044268825
I1124 08:46:27.772238   19958 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4044268825.tar
I1124 08:46:27.783838   19958 build_images.go:218] Built localhost/my-image:functional-014740 from /tmp/build.4044268825.tar
I1124 08:46:27.783874   19958 build_images.go:134] succeeded building to: functional-014740
I1124 08:46:27.783878   19958 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.54s)
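
Note: on this crio profile the build is delegated to "sudo podman build" inside the guest (see the stderr above); the earlier pgrep buildkitd failure is just the runtime probe. A minimal manual run:

    # testdata/build is a three-step build (FROM busybox, RUN true, ADD content.txt)
    out/minikube-linux-amd64 -p functional-014740 image build -t localhost/my-image:functional-014740 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-014740 image ls   # the new localhost/my-image tag should appear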

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-014740
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (4.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr: (3.862841441s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (4.08s)
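
Note: Setup pulls and retags the echo-server image in the host docker daemon, and this test then copies it from that daemon into the cluster's crio image store; condensed:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-014740
    out/minikube-linux-amd64 -p functional-014740 image load --daemon kicbase/echo-server:functional-014740
    out/minikube-linux-amd64 -p functional-014740 image ls | grep echo-server   # verify it landed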

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3105158360/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3105158360/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3105158360/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T" /mount1: exit status 1 (289.582743ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:46:12.981104    9629 retry.go:31] will retry after 496.512389ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-014740 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3105158360/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3105158360/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014740 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3105158360/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.45s)
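A minimal manual version of the mount/verify/cleanup sequence exercised above, using the commands shown in the log (/tmp/mount-src is an arbitrary example host directory):

  # expose a host directory inside the guest, in the background
  out/minikube-linux-amd64 mount -p functional-014740 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
  # confirm the guest sees the mount point
  out/minikube-linux-amd64 -p functional-014740 ssh "findmnt -T /mount1"
  # kill all mount processes for the profile, as the cleanup step above does
  out/minikube-linux-amd64 mount -p functional-014740 --kill=true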

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-014740
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image load --daemon kicbase/echo-server:functional-014740 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image save kicbase/echo-server:functional-014740 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image rm kicbase/echo-server:functional-014740 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (3.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-014740 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.856771716s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (3.04s)
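Together with ImageSaveToFile above, this forms a save/load round trip; a hedged sketch using the commands from the log (/tmp/echo-server-save.tar is an arbitrary example path):

  # save the tagged image to a tarball, load it back into the node, then confirm it is listed
  out/minikube-linux-amd64 -p functional-014740 image save kicbase/echo-server:functional-014740 /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-014740 image load /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-014740 image ls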

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-014740
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-014740 image save --daemon kicbase/echo-server:functional-014740 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-014740
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-014740
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-014740
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-014740
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (231.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1124 08:56:44.612390    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:56:55.507322    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m50.466022869s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (231.04s)
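The same HA cluster can be brought up by hand with the flags from the invocation above (test verbosity dropped; ha-398290 is the profile name used in this run):

  # start a multi-control-plane cluster on the kvm2 driver with the crio runtime, then check node status
  out/minikube-linux-amd64 -p ha-398290 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5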

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.14s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 kubectl -- rollout status deployment/busybox: (4.90641073s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-5kjns -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-ftbw7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-rpzf2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-5kjns -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-ftbw7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-rpzf2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-5kjns -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-ftbw7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-rpzf2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.14s)
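A minimal manual equivalent of the DNS checks above (the busybox pod names are run-specific; substitute one from the get pods output):

  # wait for the deployment, list its pods, then resolve the in-cluster service name from one pod
  out/minikube-linux-amd64 -p ha-398290 kubectl -- rollout status deployment/busybox
  out/minikube-linux-amd64 -p ha-398290 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local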

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-5kjns -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-5kjns -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-ftbw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-ftbw7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-rpzf2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec busybox-7b57f96db7-rpzf2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
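The same host-reachability check, sketched with the commands from the log (<busybox-pod> is a placeholder; 192.168.39.1 is the host gateway observed in this run):

  # resolve host.minikube.internal from inside a pod, then ping the host address
  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 -p ha-398290 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"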

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.43s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node add --alsologtostderr -v 5
E1124 09:01:00.040574    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.047035    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.058444    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.079819    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.121306    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.202827    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.364316    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:00.685964    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:01.328012    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 node add --alsologtostderr -v 5: (43.762835614s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
E1124 09:01:02.609574    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.43s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-398290 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.49s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp testdata/cp-test.txt ha-398290:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4117806120/001/cp-test_ha-398290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290:/home/docker/cp-test.txt ha-398290-m02:/home/docker/cp-test_ha-398290_ha-398290-m02.txt
E1124 09:01:05.171311    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test_ha-398290_ha-398290-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290:/home/docker/cp-test.txt ha-398290-m03:/home/docker/cp-test_ha-398290_ha-398290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test_ha-398290_ha-398290-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290:/home/docker/cp-test.txt ha-398290-m04:/home/docker/cp-test_ha-398290_ha-398290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test_ha-398290_ha-398290-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp testdata/cp-test.txt ha-398290-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4117806120/001/cp-test_ha-398290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m02:/home/docker/cp-test.txt ha-398290:/home/docker/cp-test_ha-398290-m02_ha-398290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test_ha-398290-m02_ha-398290.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m02:/home/docker/cp-test.txt ha-398290-m03:/home/docker/cp-test_ha-398290-m02_ha-398290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test_ha-398290-m02_ha-398290-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m02:/home/docker/cp-test.txt ha-398290-m04:/home/docker/cp-test_ha-398290-m02_ha-398290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test_ha-398290-m02_ha-398290-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp testdata/cp-test.txt ha-398290-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4117806120/001/cp-test_ha-398290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m03:/home/docker/cp-test.txt ha-398290:/home/docker/cp-test_ha-398290-m03_ha-398290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test.txt"
E1124 09:01:10.293056    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test_ha-398290-m03_ha-398290.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m03:/home/docker/cp-test.txt ha-398290-m02:/home/docker/cp-test_ha-398290-m03_ha-398290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test_ha-398290-m03_ha-398290-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m03:/home/docker/cp-test.txt ha-398290-m04:/home/docker/cp-test_ha-398290-m03_ha-398290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test_ha-398290-m03_ha-398290-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp testdata/cp-test.txt ha-398290-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4117806120/001/cp-test_ha-398290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m04:/home/docker/cp-test.txt ha-398290:/home/docker/cp-test_ha-398290-m04_ha-398290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290 "sudo cat /home/docker/cp-test_ha-398290-m04_ha-398290.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m04:/home/docker/cp-test.txt ha-398290-m02:/home/docker/cp-test_ha-398290-m04_ha-398290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test_ha-398290-m04_ha-398290-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 cp ha-398290-m04:/home/docker/cp-test.txt ha-398290-m03:/home/docker/cp-test_ha-398290-m04_ha-398290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m03 "sudo cat /home/docker/cp-test_ha-398290-m04_ha-398290-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.49s)
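Each of the copy checks above follows the same two-step pattern; a minimal sketch for one host-to-node copy:

  # copy a file from the host to node m02, then read it back over ssh to verify the contents
  out/minikube-linux-amd64 -p ha-398290 cp testdata/cp-test.txt ha-398290-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-398290 ssh -n ha-398290-m02 "sudo cat /home/docker/cp-test.txt"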

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (74.03s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node stop m02 --alsologtostderr -v 5
E1124 09:01:20.534669    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:41.016058    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:44.613093    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:01:55.507309    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:02:21.977585    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 node stop m02 --alsologtostderr -v 5: (1m13.522639553s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5: exit status 7 (510.157713ms)

                                                
                                                
-- stdout --
	ha-398290
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-398290-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-398290-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-398290-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:02:27.723776   25203 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:02:27.723956   25203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:02:27.723967   25203 out.go:374] Setting ErrFile to fd 2...
	I1124 09:02:27.723974   25203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:02:27.724220   25203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:02:27.724406   25203 out.go:368] Setting JSON to false
	I1124 09:02:27.724436   25203 mustload.go:66] Loading cluster: ha-398290
	I1124 09:02:27.724501   25203 notify.go:221] Checking for updates...
	I1124 09:02:27.724818   25203 config.go:182] Loaded profile config "ha-398290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:02:27.724836   25203 status.go:174] checking status of ha-398290 ...
	I1124 09:02:27.727421   25203 status.go:371] ha-398290 host status = "Running" (err=<nil>)
	I1124 09:02:27.727555   25203 host.go:66] Checking if "ha-398290" exists ...
	I1124 09:02:27.731062   25203 main.go:143] libmachine: domain ha-398290 has defined MAC address 52:54:00:cd:9b:a3 in network mk-ha-398290
	I1124 09:02:27.731593   25203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:9b:a3", ip: ""} in network mk-ha-398290: {Iface:virbr1 ExpiryTime:2025-11-24 09:56:33 +0000 UTC Type:0 Mac:52:54:00:cd:9b:a3 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-398290 Clientid:01:52:54:00:cd:9b:a3}
	I1124 09:02:27.731643   25203 main.go:143] libmachine: domain ha-398290 has defined IP address 192.168.39.133 and MAC address 52:54:00:cd:9b:a3 in network mk-ha-398290
	I1124 09:02:27.731838   25203 host.go:66] Checking if "ha-398290" exists ...
	I1124 09:02:27.732108   25203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:02:27.734817   25203 main.go:143] libmachine: domain ha-398290 has defined MAC address 52:54:00:cd:9b:a3 in network mk-ha-398290
	I1124 09:02:27.735318   25203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:9b:a3", ip: ""} in network mk-ha-398290: {Iface:virbr1 ExpiryTime:2025-11-24 09:56:33 +0000 UTC Type:0 Mac:52:54:00:cd:9b:a3 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-398290 Clientid:01:52:54:00:cd:9b:a3}
	I1124 09:02:27.735352   25203 main.go:143] libmachine: domain ha-398290 has defined IP address 192.168.39.133 and MAC address 52:54:00:cd:9b:a3 in network mk-ha-398290
	I1124 09:02:27.735530   25203 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/ha-398290/id_rsa Username:docker}
	I1124 09:02:27.825423   25203 ssh_runner.go:195] Run: systemctl --version
	I1124 09:02:27.833150   25203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:02:27.852228   25203 kubeconfig.go:125] found "ha-398290" server: "https://192.168.39.254:8443"
	I1124 09:02:27.852256   25203 api_server.go:166] Checking apiserver status ...
	I1124 09:02:27.852289   25203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:02:27.873010   25203 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W1124 09:02:27.886853   25203 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:02:27.886931   25203 ssh_runner.go:195] Run: ls
	I1124 09:02:27.892495   25203 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1124 09:02:27.898240   25203 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1124 09:02:27.898273   25203 status.go:463] ha-398290 apiserver status = Running (err=<nil>)
	I1124 09:02:27.898299   25203 status.go:176] ha-398290 status: &{Name:ha-398290 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:02:27.898321   25203 status.go:174] checking status of ha-398290-m02 ...
	I1124 09:02:27.900071   25203 status.go:371] ha-398290-m02 host status = "Stopped" (err=<nil>)
	I1124 09:02:27.900096   25203 status.go:384] host is not running, skipping remaining checks
	I1124 09:02:27.900103   25203 status.go:176] ha-398290-m02 status: &{Name:ha-398290-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:02:27.900122   25203 status.go:174] checking status of ha-398290-m03 ...
	I1124 09:02:27.901532   25203 status.go:371] ha-398290-m03 host status = "Running" (err=<nil>)
	I1124 09:02:27.901552   25203 host.go:66] Checking if "ha-398290-m03" exists ...
	I1124 09:02:27.904280   25203 main.go:143] libmachine: domain ha-398290-m03 has defined MAC address 52:54:00:e3:f8:ae in network mk-ha-398290
	I1124 09:02:27.904696   25203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e3:f8:ae", ip: ""} in network mk-ha-398290: {Iface:virbr1 ExpiryTime:2025-11-24 09:58:55 +0000 UTC Type:0 Mac:52:54:00:e3:f8:ae Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-398290-m03 Clientid:01:52:54:00:e3:f8:ae}
	I1124 09:02:27.904722   25203 main.go:143] libmachine: domain ha-398290-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:e3:f8:ae in network mk-ha-398290
	I1124 09:02:27.904901   25203 host.go:66] Checking if "ha-398290-m03" exists ...
	I1124 09:02:27.905147   25203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:02:27.907495   25203 main.go:143] libmachine: domain ha-398290-m03 has defined MAC address 52:54:00:e3:f8:ae in network mk-ha-398290
	I1124 09:02:27.907904   25203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e3:f8:ae", ip: ""} in network mk-ha-398290: {Iface:virbr1 ExpiryTime:2025-11-24 09:58:55 +0000 UTC Type:0 Mac:52:54:00:e3:f8:ae Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-398290-m03 Clientid:01:52:54:00:e3:f8:ae}
	I1124 09:02:27.907925   25203 main.go:143] libmachine: domain ha-398290-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:e3:f8:ae in network mk-ha-398290
	I1124 09:02:27.908083   25203 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/ha-398290-m03/id_rsa Username:docker}
	I1124 09:02:27.996358   25203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:02:28.018547   25203 kubeconfig.go:125] found "ha-398290" server: "https://192.168.39.254:8443"
	I1124 09:02:28.018572   25203 api_server.go:166] Checking apiserver status ...
	I1124 09:02:28.018609   25203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:02:28.039717   25203 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1795/cgroup
	W1124 09:02:28.051434   25203 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1795/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:02:28.051500   25203 ssh_runner.go:195] Run: ls
	I1124 09:02:28.057312   25203 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1124 09:02:28.062037   25203 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1124 09:02:28.062060   25203 status.go:463] ha-398290-m03 apiserver status = Running (err=<nil>)
	I1124 09:02:28.062068   25203 status.go:176] ha-398290-m03 status: &{Name:ha-398290-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:02:28.062082   25203 status.go:174] checking status of ha-398290-m04 ...
	I1124 09:02:28.063825   25203 status.go:371] ha-398290-m04 host status = "Running" (err=<nil>)
	I1124 09:02:28.063842   25203 host.go:66] Checking if "ha-398290-m04" exists ...
	I1124 09:02:28.066879   25203 main.go:143] libmachine: domain ha-398290-m04 has defined MAC address 52:54:00:a0:15:61 in network mk-ha-398290
	I1124 09:02:28.067333   25203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:15:61", ip: ""} in network mk-ha-398290: {Iface:virbr1 ExpiryTime:2025-11-24 10:00:34 +0000 UTC Type:0 Mac:52:54:00:a0:15:61 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-398290-m04 Clientid:01:52:54:00:a0:15:61}
	I1124 09:02:28.067360   25203 main.go:143] libmachine: domain ha-398290-m04 has defined IP address 192.168.39.128 and MAC address 52:54:00:a0:15:61 in network mk-ha-398290
	I1124 09:02:28.067527   25203 host.go:66] Checking if "ha-398290-m04" exists ...
	I1124 09:02:28.067793   25203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:02:28.070226   25203 main.go:143] libmachine: domain ha-398290-m04 has defined MAC address 52:54:00:a0:15:61 in network mk-ha-398290
	I1124 09:02:28.070544   25203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:15:61", ip: ""} in network mk-ha-398290: {Iface:virbr1 ExpiryTime:2025-11-24 10:00:34 +0000 UTC Type:0 Mac:52:54:00:a0:15:61 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-398290-m04 Clientid:01:52:54:00:a0:15:61}
	I1124 09:02:28.070565   25203 main.go:143] libmachine: domain ha-398290-m04 has defined IP address 192.168.39.128 and MAC address 52:54:00:a0:15:61 in network mk-ha-398290
	I1124 09:02:28.070767   25203 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/ha-398290-m04/id_rsa Username:docker}
	I1124 09:02:28.153645   25203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:02:28.171033   25203 status.go:176] ha-398290-m04 status: &{Name:ha-398290-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (74.03s)
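A minimal manual version of this step, using the commands above (note that status exits with code 7 while any node is stopped, as seen in the non-zero exit above):

  # stop the second control-plane node, then confirm it is reported as Stopped
  out/minikube-linux-amd64 -p ha-398290 node stop m02 --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5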

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (35.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 node start m02 --alsologtostderr -v 5: (34.794263362s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (349.18s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 stop --alsologtostderr -v 5
E1124 09:03:18.570122    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:03:43.899072    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:06:00.041616    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:06:27.740745    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:06:44.616599    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:06:55.507296    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 stop --alsologtostderr -v 5: (3m52.373645038s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 start --wait true --alsologtostderr -v 5: (1m56.676259902s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (349.18s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.06s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 node delete m03 --alsologtostderr -v 5: (17.402811804s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.06s)
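A minimal manual version of the delete-and-verify step above (the go-template readiness check is copied from the log):

  # remove the third control-plane node, then confirm the remaining nodes report Ready
  out/minikube-linux-amd64 -p ha-398290 node delete m03 --alsologtostderr -v 5
  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"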

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (243.34s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 stop --alsologtostderr -v 5
E1124 09:09:47.685540    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:11:00.041978    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:11:44.616088    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:11:55.507328    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 stop --alsologtostderr -v 5: (4m3.271727817s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5: exit status 7 (63.50552ms)

                                                
                                                
-- stdout --
	ha-398290
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-398290-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-398290-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:13:16.064690   28469 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:13:16.064807   28469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:13:16.064818   28469 out.go:374] Setting ErrFile to fd 2...
	I1124 09:13:16.064825   28469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:13:16.065043   28469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:13:16.065218   28469 out.go:368] Setting JSON to false
	I1124 09:13:16.065240   28469 mustload.go:66] Loading cluster: ha-398290
	I1124 09:13:16.065312   28469 notify.go:221] Checking for updates...
	I1124 09:13:16.065574   28469 config.go:182] Loaded profile config "ha-398290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:16.065591   28469 status.go:174] checking status of ha-398290 ...
	I1124 09:13:16.067935   28469 status.go:371] ha-398290 host status = "Stopped" (err=<nil>)
	I1124 09:13:16.067952   28469 status.go:384] host is not running, skipping remaining checks
	I1124 09:13:16.067958   28469 status.go:176] ha-398290 status: &{Name:ha-398290 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:13:16.067986   28469 status.go:174] checking status of ha-398290-m02 ...
	I1124 09:13:16.069260   28469 status.go:371] ha-398290-m02 host status = "Stopped" (err=<nil>)
	I1124 09:13:16.069317   28469 status.go:384] host is not running, skipping remaining checks
	I1124 09:13:16.069328   28469 status.go:176] ha-398290-m02 status: &{Name:ha-398290-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:13:16.069340   28469 status.go:174] checking status of ha-398290-m04 ...
	I1124 09:13:16.070354   28469 status.go:371] ha-398290-m04 host status = "Stopped" (err=<nil>)
	I1124 09:13:16.070368   28469 status.go:384] host is not running, skipping remaining checks
	I1124 09:13:16.070373   28469 status.go:176] ha-398290-m04 status: &{Name:ha-398290-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.34s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.7s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m40.063365376s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 node add --control-plane --alsologtostderr -v 5
E1124 09:16:00.040635    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-398290 node add --control-plane --alsologtostderr -v 5: (1m17.483247559s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.14s)
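A minimal manual version of the step above:

  # add another control-plane node to the running cluster, then re-check status
  out/minikube-linux-amd64 -p ha-398290 node add --control-plane --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-398290 status --alsologtostderr -v 5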

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
TestJSONOutput/start/Command (84.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-674447 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1124 09:16:44.616684    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:16:55.507405    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:17:23.102653    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-674447 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.381196402s)
--- PASS: TestJSONOutput/start/Command (84.38s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
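The two parallel subtests above assert properties of the step events emitted by `--output=json`: the currentstep values must be distinct and must increase. A small sketch of that invariant over already-decoded events follows; the `currentstep` field name is taken from the JSON events printed later in this report, and the struct is illustrative rather than minikube's own event type.

package main

import (
	"fmt"
	"strconv"
)

// stepEvent holds only the field the check needs; minikube emits
// currentstep as a string inside each step event's data payload.
type stepEvent struct {
	CurrentStep string
}

// distinctAndIncreasing mirrors what the two subtests assert: no step
// number repeats, and each one is larger than the previous.
func distinctAndIncreasing(events []stepEvent) bool {
	seen := map[int]bool{}
	prev := -1
	for _, e := range events {
		n, err := strconv.Atoi(e.CurrentStep)
		if err != nil || seen[n] || n <= prev {
			return false
		}
		seen[n] = true
		prev = n
	}
	return true
}

func main() {
	// Sample data for illustration, not captured from this run.
	events := []stepEvent{{"0"}, {"1"}, {"3"}}
	fmt.Println(distinctAndIncreasing(events))
}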

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-674447 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-674447 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-674447 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-674447 --output=json --user=testUser: (6.836208568s)
--- PASS: TestJSONOutput/stop/Command (6.84s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-925424 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-925424 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.393788ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c732bd76-eda7-4543-b3ea-34834613339f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-925424] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ab5b549-fd6f-4705-a773-244e9d6acffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"c77faafa-4b8e-441b-a180-45de4994172f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a68a999-1af4-41fd-8922-0ed0d2a7e83c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig"}}
	{"specversion":"1.0","id":"513c44ec-713b-433d-bdd5-01aa811f8059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube"}}
	{"specversion":"1.0","id":"bb2bb235-10f1-43d1-9af2-634ce005f7cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cf8daac5-44a5-41c1-8b8d-a036d8b54a75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50d18b72-3eb6-46f0-afe0-874885a885bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-925424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-925424
--- PASS: TestErrorJSONOutput (0.24s)
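Each stdout line above is a CloudEvents-style JSON object with specversion, type, and a data payload, and the failure case carries an exitcode (56, DRV_UNSUPPORTED_OS). The sketch below decodes such lines from stdin and surfaces the first error event; the field names are copied from the events shown above, but the struct is only a partial, illustrative mapping, not minikube's schema definition.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event maps only the fields used below; the JSON lines above carry more.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// Reads minikube --output=json lines from stdin and reports the first
// error event, e.g. the DRV_UNSUPPORTED_OS event shown above.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip any non-JSON noise
		}
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue
		}
		if strings.HasSuffix(e.Type, ".error") {
			fmt.Printf("error %s (exit code %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
			return
		}
	}
	fmt.Println("no error events")
}

Piping a run's output through it, for example `out/minikube-linux-amd64 start -p some-profile --output=json | go run decode.go`, would print the first error event or report that none appeared.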

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (84.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-972909 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-972909 --driver=kvm2  --container-runtime=crio: (41.003874153s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-974763 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-974763 --driver=kvm2  --container-runtime=crio: (41.144221297s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-972909
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-974763
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-974763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-974763
helpers_test.go:175: Cleaning up "first-972909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-972909
--- PASS: TestMinikubeProfile (84.70s)
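The profile checks above rely on `profile list -ojson`. Since this report does not show that command's JSON schema, the sketch below deliberately avoids a typed decode and only confirms that both profile names created above appear in the raw output; it is a loose illustration, not the test's assertion.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Lists profiles as JSON and checks for the two names used above with a
// plain substring match, since the schema is not shown in this report.
func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"first-972909", "second-974763"} {
		fmt.Printf("%s listed: %v\n", name, strings.Contains(string(out), name))
	}
}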

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (19.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-641300 --memory=3072 --mount-string /tmp/TestMountStartserial3912006967/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-641300 --memory=3072 --mount-string /tmp/TestMountStartserial3912006967/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.299970188s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.30s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-641300 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-641300 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
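The verification above asks the guest for `findmnt --json /minikube-host`. The sketch below runs the same ssh invocation from the host and decodes the result; the field names follow util-linux findmnt's JSON layout (a top-level "filesystems" array with target/source/fstype/options), which is an assumption here since the report does not print that output.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut models util-linux findmnt --json output: a "filesystems"
// array whose entries carry target, source, fstype, and options.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Same ssh invocation as the block above, against the first mount profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-641300",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	var fm findmntOut
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s mounted from %s (%s, %s)\n", fs.Target, fs.Source, fs.Fstype, fs.Options)
	}
}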

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (19.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-657711 --memory=3072 --mount-string /tmp/TestMountStartserial3912006967/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-657711 --memory=3072 --mount-string /tmp/TestMountStartserial3912006967/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.642236066s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-657711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-657711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-641300 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-657711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-657711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-657711
E1124 09:19:58.572484    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-657711: (1.222133181s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (18.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-657711
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-657711: (17.74334057s)
--- PASS: TestMountStart/serial/RestartStopped (18.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-657711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-657711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (129.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-304941 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1124 09:21:00.040385    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:21:44.612620    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:21:55.507241    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-304941 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m9.089691381s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.41s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-304941 -- rollout status deployment/busybox: (4.526404564s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-449l7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-5ck4z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-449l7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-5ck4z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-449l7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-5ck4z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.08s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-449l7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-449l7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-5ck4z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-304941 -- exec busybox-7b57f96db7-5ck4z -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
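The ping check above extracts the host IP from the pod's nslookup output with `awk 'NR==5' | cut -d' ' -f3` (line 5, third single-space-separated field) and then pings it. The sketch below reproduces just that extraction in Go over output piped in on stdin; the exact nslookup layout inside the busybox pod is not shown here, so the function only mirrors the pipeline's line/field arithmetic.

package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take the 5th line and
// return its 3rd field when splitting on single spaces (cut, like
// strings.Split, counts empty fields between repeated spaces).
func hostIP(nslookup string) string {
	lines := strings.Split(nslookup, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Pipe the pod's nslookup output in, e.g.:
	//   kubectl exec <pod> -- nslookup host.minikube.internal | go run hostip.go
	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	fmt.Println(hostIP(string(data))) // the test then runs `ping -c 1 <ip>`
}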

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-304941 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-304941 -v=5 --alsologtostderr: (41.80899587s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.24s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-304941 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (5.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp testdata/cp-test.txt multinode-304941:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile865254845/001/cp-test_multinode-304941.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941:/home/docker/cp-test.txt multinode-304941-m02:/home/docker/cp-test_multinode-304941_multinode-304941-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m02 "sudo cat /home/docker/cp-test_multinode-304941_multinode-304941-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941:/home/docker/cp-test.txt multinode-304941-m03:/home/docker/cp-test_multinode-304941_multinode-304941-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m03 "sudo cat /home/docker/cp-test_multinode-304941_multinode-304941-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp testdata/cp-test.txt multinode-304941-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile865254845/001/cp-test_multinode-304941-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941-m02:/home/docker/cp-test.txt multinode-304941:/home/docker/cp-test_multinode-304941-m02_multinode-304941.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941 "sudo cat /home/docker/cp-test_multinode-304941-m02_multinode-304941.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941-m02:/home/docker/cp-test.txt multinode-304941-m03:/home/docker/cp-test_multinode-304941-m02_multinode-304941-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m03 "sudo cat /home/docker/cp-test_multinode-304941-m02_multinode-304941-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp testdata/cp-test.txt multinode-304941-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile865254845/001/cp-test_multinode-304941-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941-m03:/home/docker/cp-test.txt multinode-304941:/home/docker/cp-test_multinode-304941-m03_multinode-304941.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941 "sudo cat /home/docker/cp-test_multinode-304941-m03_multinode-304941.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 cp multinode-304941-m03:/home/docker/cp-test.txt multinode-304941-m02:/home/docker/cp-test_multinode-304941-m03_multinode-304941-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 ssh -n multinode-304941-m02 "sudo cat /home/docker/cp-test_multinode-304941-m03_multinode-304941-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.85s)
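Every step in the CopyFile block pairs a `minikube cp` with an `ssh -n <node> "sudo cat ..."` to confirm the copy landed intact. The sketch below shows one such round trip in Go, comparing the local testdata file against what `cat` returns; the profile, node, and paths are taken from the commands above, and this is an illustration of the pattern rather than the helpers in helpers_test.go.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// Copies a local file into the node with `minikube cp`, reads it back via
// `minikube ssh -n <node> "sudo cat ..."`, and compares the two (trimming
// trailing whitespace to tolerate a final newline difference).
func main() {
	profile, src, dst := "multinode-304941", "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", src, profile+":"+dst).Run(); err != nil {
		panic(err)
	}
	remote, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", profile, "sudo cat "+dst).Output()
	if err != nil {
		panic(err)
	}
	local, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)))
}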

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-304941 node stop m03: (1.647428665s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-304941 status: exit status 7 (313.25013ms)

                                                
                                                
-- stdout --
	multinode-304941
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-304941-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-304941-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr: exit status 7 (323.463583ms)

                                                
                                                
-- stdout --
	multinode-304941
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-304941-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-304941-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:23:25.925792   34940 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:23:25.926052   34940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:23:25.926064   34940 out.go:374] Setting ErrFile to fd 2...
	I1124 09:23:25.926069   34940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:23:25.926276   34940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:23:25.926456   34940 out.go:368] Setting JSON to false
	I1124 09:23:25.926479   34940 mustload.go:66] Loading cluster: multinode-304941
	I1124 09:23:25.926640   34940 notify.go:221] Checking for updates...
	I1124 09:23:25.926819   34940 config.go:182] Loaded profile config "multinode-304941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:23:25.926837   34940 status.go:174] checking status of multinode-304941 ...
	I1124 09:23:25.928937   34940 status.go:371] multinode-304941 host status = "Running" (err=<nil>)
	I1124 09:23:25.928958   34940 host.go:66] Checking if "multinode-304941" exists ...
	I1124 09:23:25.931819   34940 main.go:143] libmachine: domain multinode-304941 has defined MAC address 52:54:00:65:ab:bd in network mk-multinode-304941
	I1124 09:23:25.932360   34940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:bd", ip: ""} in network mk-multinode-304941: {Iface:virbr1 ExpiryTime:2025-11-24 10:20:33 +0000 UTC Type:0 Mac:52:54:00:65:ab:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-304941 Clientid:01:52:54:00:65:ab:bd}
	I1124 09:23:25.932396   34940 main.go:143] libmachine: domain multinode-304941 has defined IP address 192.168.39.163 and MAC address 52:54:00:65:ab:bd in network mk-multinode-304941
	I1124 09:23:25.932645   34940 host.go:66] Checking if "multinode-304941" exists ...
	I1124 09:23:25.932927   34940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:23:25.935340   34940 main.go:143] libmachine: domain multinode-304941 has defined MAC address 52:54:00:65:ab:bd in network mk-multinode-304941
	I1124 09:23:25.935807   34940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:bd", ip: ""} in network mk-multinode-304941: {Iface:virbr1 ExpiryTime:2025-11-24 10:20:33 +0000 UTC Type:0 Mac:52:54:00:65:ab:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:multinode-304941 Clientid:01:52:54:00:65:ab:bd}
	I1124 09:23:25.935843   34940 main.go:143] libmachine: domain multinode-304941 has defined IP address 192.168.39.163 and MAC address 52:54:00:65:ab:bd in network mk-multinode-304941
	I1124 09:23:25.936042   34940 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/multinode-304941/id_rsa Username:docker}
	I1124 09:23:26.018005   34940 ssh_runner.go:195] Run: systemctl --version
	I1124 09:23:26.024357   34940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:23:26.040997   34940 kubeconfig.go:125] found "multinode-304941" server: "https://192.168.39.163:8443"
	I1124 09:23:26.041046   34940 api_server.go:166] Checking apiserver status ...
	I1124 09:23:26.041109   34940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:23:26.059644   34940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W1124 09:23:26.071632   34940 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:23:26.071702   34940 ssh_runner.go:195] Run: ls
	I1124 09:23:26.077306   34940 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1124 09:23:26.082842   34940 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I1124 09:23:26.082866   34940 status.go:463] multinode-304941 apiserver status = Running (err=<nil>)
	I1124 09:23:26.082875   34940 status.go:176] multinode-304941 status: &{Name:multinode-304941 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:23:26.082899   34940 status.go:174] checking status of multinode-304941-m02 ...
	I1124 09:23:26.084526   34940 status.go:371] multinode-304941-m02 host status = "Running" (err=<nil>)
	I1124 09:23:26.084549   34940 host.go:66] Checking if "multinode-304941-m02" exists ...
	I1124 09:23:26.087045   34940 main.go:143] libmachine: domain multinode-304941-m02 has defined MAC address 52:54:00:98:3b:89 in network mk-multinode-304941
	I1124 09:23:26.087473   34940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:98:3b:89", ip: ""} in network mk-multinode-304941: {Iface:virbr1 ExpiryTime:2025-11-24 10:21:59 +0000 UTC Type:0 Mac:52:54:00:98:3b:89 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-304941-m02 Clientid:01:52:54:00:98:3b:89}
	I1124 09:23:26.087503   34940 main.go:143] libmachine: domain multinode-304941-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:98:3b:89 in network mk-multinode-304941
	I1124 09:23:26.087651   34940 host.go:66] Checking if "multinode-304941-m02" exists ...
	I1124 09:23:26.087833   34940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:23:26.089776   34940 main.go:143] libmachine: domain multinode-304941-m02 has defined MAC address 52:54:00:98:3b:89 in network mk-multinode-304941
	I1124 09:23:26.090137   34940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:98:3b:89", ip: ""} in network mk-multinode-304941: {Iface:virbr1 ExpiryTime:2025-11-24 10:21:59 +0000 UTC Type:0 Mac:52:54:00:98:3b:89 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-304941-m02 Clientid:01:52:54:00:98:3b:89}
	I1124 09:23:26.090169   34940 main.go:143] libmachine: domain multinode-304941-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:98:3b:89 in network mk-multinode-304941
	I1124 09:23:26.090291   34940 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21978-5665/.minikube/machines/multinode-304941-m02/id_rsa Username:docker}
	I1124 09:23:26.168980   34940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:23:26.187355   34940 status.go:176] multinode-304941-m02 status: &{Name:multinode-304941-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:23:26.187393   34940 status.go:174] checking status of multinode-304941-m03 ...
	I1124 09:23:26.189085   34940 status.go:371] multinode-304941-m03 host status = "Stopped" (err=<nil>)
	I1124 09:23:26.189108   34940 status.go:384] host is not running, skipping remaining checks
	I1124 09:23:26.189115   34940 status.go:176] multinode-304941-m03 status: &{Name:multinode-304941-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
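Note that both status invocations above exit non-zero (status 7) once m03 is stopped, while stdout still carries the per-node breakdown, so the test treats the exit code as data rather than a failure. A sketch of reading that code in Go follows; the meaning of 7 here is taken only from the observed output above, not from any documentation quoted in this report.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Runs `minikube status` for the profile above and prints its exit code
// along with the captured stdout; Output() still returns stdout when the
// command exits non-zero, as it does while a node is stopped.
func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-304941", "status").Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		panic(err) // the command did not run at all
	}
	fmt.Printf("exit code %d\n%s", code, out)
}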

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (41.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-304941 node start m03 -v=5 --alsologtostderr: (40.892541658s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.38s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (294.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-304941
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-304941
E1124 09:26:00.043507    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:26:27.688398    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:26:44.612602    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:26:55.506595    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-304941: (2m51.014351791s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-304941 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-304941 --wait=true -v=5 --alsologtostderr: (2m3.481273466s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-304941
--- PASS: TestMultiNode/serial/RestartKeepsNodes (294.62s)
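The flow above records `node list` before the stop and runs it again after the restart, the point being that the node set survives a full stop/start. The sketch below mirrors that before/after comparison with a plain string equality check, assuming the listing stays byte-for-byte stable across the restart as this passing run suggests; it is a simplified stand-in for the test's own comparison.

package main

import (
	"fmt"
	"os/exec"
)

// nodeList captures `minikube node list` output for the profile.
func nodeList(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", profile).Output()
	return string(out), err
}

// Records the node list, stops and restarts the cluster, then checks the
// listing is unchanged, mirroring the before/after runs above.
func main() {
	const profile = "multinode-304941"

	before, err := nodeList(profile)
	if err != nil {
		panic(err)
	}
	if err := exec.Command("out/minikube-linux-amd64", "stop", "-p", profile).Run(); err != nil {
		panic(err)
	}
	if err := exec.Command("out/minikube-linux-amd64", "start", "-p", profile, "--wait=true").Run(); err != nil {
		panic(err)
	}
	after, err := nodeList(profile)
	if err != nil {
		panic(err)
	}
	fmt.Println("node list unchanged:", before == after)
}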

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-304941 node delete m03: (2.212092899s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.66s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (169.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 stop
E1124 09:31:00.044096    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:31:44.616070    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-304941 stop: (2m49.484338137s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-304941 status: exit status 7 (61.99211ms)

                                                
                                                
-- stdout --
	multinode-304941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-304941-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr: exit status 7 (62.954881ms)

                                                
                                                
-- stdout --
	multinode-304941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-304941-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:31:54.458140   37428 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:31:54.458403   37428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:54.458411   37428 out.go:374] Setting ErrFile to fd 2...
	I1124 09:31:54.458415   37428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:31:54.458621   37428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:31:54.458778   37428 out.go:368] Setting JSON to false
	I1124 09:31:54.458799   37428 mustload.go:66] Loading cluster: multinode-304941
	I1124 09:31:54.458927   37428 notify.go:221] Checking for updates...
	I1124 09:31:54.459216   37428 config.go:182] Loaded profile config "multinode-304941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:31:54.459240   37428 status.go:174] checking status of multinode-304941 ...
	I1124 09:31:54.461610   37428 status.go:371] multinode-304941 host status = "Stopped" (err=<nil>)
	I1124 09:31:54.461624   37428 status.go:384] host is not running, skipping remaining checks
	I1124 09:31:54.461629   37428 status.go:176] multinode-304941 status: &{Name:multinode-304941 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 09:31:54.461644   37428 status.go:174] checking status of multinode-304941-m02 ...
	I1124 09:31:54.463176   37428 status.go:371] multinode-304941-m02 host status = "Stopped" (err=<nil>)
	I1124 09:31:54.463195   37428 status.go:384] host is not running, skipping remaining checks
	I1124 09:31:54.463201   37428 status.go:176] multinode-304941-m02 status: &{Name:multinode-304941-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (169.61s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (83.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-304941 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1124 09:31:55.507035    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-304941 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m23.544973395s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-304941 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.99s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (39.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-304941
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-304941-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-304941-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (73.026693ms)

                                                
                                                
-- stdout --
	* [multinode-304941-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-304941-m02' is duplicated with machine name 'multinode-304941-m02' in profile 'multinode-304941'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-304941-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-304941-m03 --driver=kvm2  --container-runtime=crio: (38.620372979s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-304941
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-304941: exit status 80 (210.220465ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-304941 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-304941-m03 already exists in multinode-304941-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-304941-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.75s)

                                                
                                    
x
+
TestScheduledStopUnix (109.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-609817 --memory=3072 --driver=kvm2  --container-runtime=crio
E1124 09:36:38.576625    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:36:44.613286    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:36:55.507324    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-609817 --memory=3072 --driver=kvm2  --container-runtime=crio: (37.97157077s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-609817 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 09:37:10.830611   39964 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:37:10.830882   39964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:10.830892   39964 out.go:374] Setting ErrFile to fd 2...
	I1124 09:37:10.830896   39964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:10.831111   39964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:37:10.831369   39964 out.go:368] Setting JSON to false
	I1124 09:37:10.831480   39964 mustload.go:66] Loading cluster: scheduled-stop-609817
	I1124 09:37:10.831774   39964 config.go:182] Loaded profile config "scheduled-stop-609817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:37:10.831832   39964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/config.json ...
	I1124 09:37:10.832000   39964 mustload.go:66] Loading cluster: scheduled-stop-609817
	I1124 09:37:10.832094   39964 config.go:182] Loaded profile config "scheduled-stop-609817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-609817 -n scheduled-stop-609817
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-609817 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 09:37:11.123739   40010 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:37:11.123963   40010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:11.123972   40010 out.go:374] Setting ErrFile to fd 2...
	I1124 09:37:11.123975   40010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:11.124182   40010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:37:11.124399   40010 out.go:368] Setting JSON to false
	I1124 09:37:11.124585   40010 daemonize_unix.go:73] killing process 39999 as it is an old scheduled stop
	I1124 09:37:11.124683   40010 mustload.go:66] Loading cluster: scheduled-stop-609817
	I1124 09:37:11.124988   40010 config.go:182] Loaded profile config "scheduled-stop-609817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:37:11.125050   40010 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/config.json ...
	I1124 09:37:11.125237   40010 mustload.go:66] Loading cluster: scheduled-stop-609817
	I1124 09:37:11.125334   40010 config.go:182] Loaded profile config "scheduled-stop-609817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 09:37:11.130810    9629 retry.go:31] will retry after 101.395µs: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.131966    9629 retry.go:31] will retry after 174.751µs: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.133113    9629 retry.go:31] will retry after 160.431µs: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.134203    9629 retry.go:31] will retry after 361.906µs: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.135331    9629 retry.go:31] will retry after 494.983µs: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.136462    9629 retry.go:31] will retry after 1.089639ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.137614    9629 retry.go:31] will retry after 1.405145ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.139814    9629 retry.go:31] will retry after 1.825393ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.142038    9629 retry.go:31] will retry after 2.359366ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.145271    9629 retry.go:31] will retry after 4.253679ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.150582    9629 retry.go:31] will retry after 4.247076ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.155809    9629 retry.go:31] will retry after 9.878835ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.166041    9629 retry.go:31] will retry after 8.041172ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.174231    9629 retry.go:31] will retry after 24.729268ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.199531    9629 retry.go:31] will retry after 20.643884ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
I1124 09:37:11.220806    9629 retry.go:31] will retry after 44.025733ms: open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-609817 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-609817 -n scheduled-stop-609817
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-609817
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-609817 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 09:37:36.817604   40159 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:37:36.817877   40159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:36.817887   40159 out.go:374] Setting ErrFile to fd 2...
	I1124 09:37:36.817891   40159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:36.818134   40159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:37:36.818444   40159 out.go:368] Setting JSON to false
	I1124 09:37:36.818533   40159 mustload.go:66] Loading cluster: scheduled-stop-609817
	I1124 09:37:36.818889   40159 config.go:182] Loaded profile config "scheduled-stop-609817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:37:36.818964   40159 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/scheduled-stop-609817/config.json ...
	I1124 09:37:36.819192   40159 mustload.go:66] Loading cluster: scheduled-stop-609817
	I1124 09:37:36.819309   40159 config.go:182] Loaded profile config "scheduled-stop-609817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-609817
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-609817: exit status 7 (59.527398ms)

                                                
                                                
-- stdout --
	scheduled-stop-609817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-609817 -n scheduled-stop-609817
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-609817 -n scheduled-stop-609817: exit status 7 (57.991988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-609817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-609817
--- PASS: TestScheduledStopUnix (109.55s)
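
The retry.go lines in this test show the harness polling for the scheduled-stop pid file, roughly doubling the wait between attempts. A minimal standalone Go sketch of that polling-with-backoff pattern; the waitForFile helper, the attempt budget, and the pid path are illustrative placeholders, not minikube's code:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the attempt budget is spent,
// roughly doubling the sleep between tries, like the retry log above.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	// Hypothetical path; the real test waits for the profile's scheduled-stop pid file.
	if err := waitForFile("/tmp/scheduled-stop.pid", 16); err != nil {
		fmt.Println(err)
	}
}

Doubling the delay keeps the first checks cheap (microseconds, as in the log) while bounding the total wait if the file never appears.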

                                                
                                    
TestRunningBinaryUpgrade (118.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.762848004 start -p running-upgrade-825648 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.762848004 start -p running-upgrade-825648 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m5.058540733s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-825648 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-825648 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.881244753s)
helpers_test.go:175: Cleaning up "running-upgrade-825648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-825648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-825648: (1.632052731s)
--- PASS: TestRunningBinaryUpgrade (118.27s)

                                                
                                    
TestKubernetesUpgrade (181.98s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.66183899s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-791432
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-791432: (2.055533746s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-791432 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-791432 status --format={{.Host}}: exit status 7 (76.029947ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.690220707s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-791432 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (134.75912ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-791432] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-791432
	    minikube start -p kubernetes-upgrade-791432 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7914322 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-791432 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-791432 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.315691498s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-791432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-791432
--- PASS: TestKubernetesUpgrade (181.98s)
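
The downgrade attempt above is expected to fail (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). A hedged Go sketch of the general pattern for asserting that a command exits non-zero and inspecting its exit code; the `false` command here stands in for the failing `minikube start`, and the sketch is not the project's actual test code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "false" stands in for the start invocation that must fail.
	cmd := exec.Command("false")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A real test would compare this against the expected code (106 above).
		fmt.Printf("command failed as expected, exit code %d\n", exitErr.ExitCode())
		return
	}
	fmt.Println("expected the command to fail, but it did not")
}

errors.As unwraps the *exec.ExitError so the concrete exit code can be compared against the expected value instead of only checking err != nil.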

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.72s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (151.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2688821512 start -p stopped-upgrade-023022 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2688821512 start -p stopped-upgrade-023022 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m34.684206407s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2688821512 -p stopped-upgrade-023022 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2688821512 -p stopped-upgrade-023022 stop: (1.793449785s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-023022 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-023022 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.223930232s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (151.70s)

                                                
                                    
TestNetworkPlugins/group/false (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-507741 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-507741 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (125.292844ms)

                                                
                                                
-- stdout --
	* [false-507741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:38:25.876942   41244 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:38:25.877039   41244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:38:25.877050   41244 out.go:374] Setting ErrFile to fd 2...
	I1124 09:38:25.877058   41244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:38:25.877275   41244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-5665/.minikube/bin
	I1124 09:38:25.877809   41244 out.go:368] Setting JSON to false
	I1124 09:38:25.878725   41244 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4842,"bootTime":1763972264,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:38:25.878779   41244 start.go:143] virtualization: kvm guest
	I1124 09:38:25.880729   41244 out.go:179] * [false-507741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:38:25.882038   41244 notify.go:221] Checking for updates...
	I1124 09:38:25.882072   41244 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:38:25.883341   41244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:38:25.884448   41244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	I1124 09:38:25.885616   41244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	I1124 09:38:25.886832   41244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:38:25.891397   41244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:38:25.893542   41244 config.go:182] Loaded profile config "kubernetes-upgrade-791432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 09:38:25.893692   41244 config.go:182] Loaded profile config "offline-crio-776898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:38:25.893829   41244 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:38:25.926988   41244 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 09:38:25.928345   41244 start.go:309] selected driver: kvm2
	I1124 09:38:25.928362   41244 start.go:927] validating driver "kvm2" against <nil>
	I1124 09:38:25.928373   41244 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:38:25.930130   41244 out.go:203] 
	W1124 09:38:25.931392   41244 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 09:38:25.932356   41244 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-507741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-507741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-507741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507741"

                                                
                                                
----------------------- debugLogs end: false-507741 [took: 3.357046989s] --------------------------------
helpers_test.go:175: Cleaning up "false-507741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-507741
--- PASS: TestNetworkPlugins/group/false (3.64s)

                                                
                                    
TestPause/serial/Start (76.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-377882 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-377882 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m16.422306261s)
--- PASS: TestPause/serial/Start (76.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-544416 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-544416 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (111.912312ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-544416] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-5665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-5665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
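
The MK_USAGE failure above comes from passing --kubernetes-version together with --no-kubernetes. A small illustrative Go sketch of that kind of mutually exclusive flag check; it mirrors the message and the exit status 14 observed above, but it is not minikube's implementation:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// Reject the combination, mirroring the MK_USAGE behaviour seen above
	// (illustrative only).
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flag combination accepted")
}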

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (60.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-544416 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-544416 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.078237531s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-544416 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (60.35s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-023022
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-023022: (1.20184578s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
TestISOImage/Setup (33.17s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-554599 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-554599 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.172919172s)
--- PASS: TestISOImage/Setup (33.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (33.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-544416 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1124 09:41:44.614335    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-544416 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (32.30013081s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-544416 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-544416 status -o json: exit status 2 (199.356757ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-544416","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-544416
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.32s)

                                                
                                    
TestISOImage/Binaries/crictl (0.34s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.34s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
TestISOImage/Binaries/podman (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

                                                
                                    
TestISOImage/Binaries/rsync (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

                                                
                                    
TestISOImage/Binaries/socat (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

                                                
                                    
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)
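
The TestISOImage/Binaries subtests above all follow the same shape: one parent test with a t.Run per binary, each running `which <name>` (in the real suite, over `minikube ssh` against the guest). An illustrative, self-contained version of that table-driven pattern that checks the local host instead and skips rather than fails when a binary is absent:

package iso_test

import (
	"os/exec"
	"testing"
)

// TestBinariesPresent mirrors the table-driven shape of the subtests above:
// one t.Run per binary. It checks the local host and skips when a binary is
// missing; the real tests run `which` inside the guest and fail instead.
func TestBinariesPresent(t *testing.T) {
	binaries := []string{"crictl", "curl", "docker", "git", "iptables", "podman", "rsync", "socat", "wget"}
	for _, bin := range binaries {
		bin := bin
		t.Run(bin, func(t *testing.T) {
			t.Parallel()
			if err := exec.Command("which", bin).Run(); err != nil {
				t.Skipf("%s not found on this host: %v", bin, err)
			}
		})
	}
}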

                                                
                                    
TestNoKubernetes/serial/Start (55.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-544416 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-544416 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.631705006s)
--- PASS: TestNoKubernetes/serial/Start (55.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21978-5665/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-544416 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-544416 "sudo systemctl is-active --quiet service kubelet": exit status 1 (169.586118ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
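
The VerifyK8sNotRunning check relies on the `systemctl is-active --quiet` convention: exit 0 when the unit is active, non-zero otherwise, so the non-zero exit above is exactly what the test wants. A minimal local Go sketch of that interpretation; the real check runs inside the guest via `minikube ssh`:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active;
	// any failure (non-zero exit, or systemctl unavailable) is treated here
	// as "not active", which is the outcome the test above asserts.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active")
		return
	}
	fmt.Println("kubelet is active")
}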

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.72s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.72s)

x
+
TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-544416
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-544416: (1.351421187s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

x
+
TestNoKubernetes/serial/StartNoArgs (32.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-544416 --driver=kvm2  --container-runtime=crio
E1124 09:43:07.690396    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-544416 --driver=kvm2  --container-runtime=crio: (32.848471749s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.85s)

x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-544416 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-544416 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.099159ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (91.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m31.074034802s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (91.07s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (109.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m49.023411534s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (109.02s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-960867 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17d1d3d6-9da9-4d49-b629-0c65f78b2ad6] Pending
helpers_test.go:352: "busybox" [17d1d3d6-9da9-4d49-b629-0c65f78b2ad6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [17d1d3d6-9da9-4d49-b629-0c65f78b2ad6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003774397s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-960867 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-960867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-960867 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

x
+
TestStartStop/group/old-k8s-version/serial/Stop (84.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-960867 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-960867 --alsologtostderr -v=3: (1m24.042960584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.04s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-778378 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [248b222a-4ed0-46b2-97da-3f107ddede66] Pending
helpers_test.go:352: "busybox" [248b222a-4ed0-46b2-97da-3f107ddede66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [248b222a-4ed0-46b2-97da-3f107ddede66] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005420334s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-778378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-778378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-778378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

x
+
TestStartStop/group/no-preload/serial/Stop (87.38s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-778378 --alsologtostderr -v=3
E1124 09:46:00.040346    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-778378 --alsologtostderr -v=3: (1m27.376076332s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (87.38s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-960867 -n old-k8s-version-960867
E1124 09:46:44.612551    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-960867 -n old-k8s-version-960867: exit status 7 (62.866093ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-960867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-960867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (46.239290457s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-960867 -n old-k8s-version-960867
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.59s)

x
+
TestStartStop/group/embed-certs/serial/FirstStart (94.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-626350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1124 09:46:55.507118    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-626350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m34.117438575s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.12s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778378 -n no-preload-778378
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778378 -n no-preload-778378: exit status 7 (72.621139ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-778378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (65.61s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-778378 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m5.291534141s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778378 -n no-preload-778378
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (65.61s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c2k52" [bf5d3b23-df45-4a82-a0a2-1345e71163e6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c2k52" [bf5d3b23-df45-4a82-a0a2-1345e71163e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.006250292s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c2k52" [bf5d3b23-df45-4a82-a0a2-1345e71163e6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004264583s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-960867 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-960867 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-960867 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-960867 -n old-k8s-version-960867
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-960867 -n old-k8s-version-960867: exit status 2 (211.403275ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-960867 -n old-k8s-version-960867
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-960867 -n old-k8s-version-960867: exit status 2 (232.157606ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-960867 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-960867 -n old-k8s-version-960867
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-960867 -n old-k8s-version-960867
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-728268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-728268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m22.757802169s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.76s)

x
+
TestStartStop/group/newest-cni/serial/FirstStart (75.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-476738 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-476738 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m15.605728873s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.61s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-wc68z" [879b0d65-4d66-4ea0-9c17-63e8935bd21d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005066066s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-wc68z" [879b0d65-4d66-4ea0-9c17-63e8935bd21d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00509404s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-778378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

x
+
TestStartStop/group/embed-certs/serial/DeployApp (12.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-626350 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7da0e0c5-c20c-4569-a279-d49cfe516a97] Pending
helpers_test.go:352: "busybox" [7da0e0c5-c20c-4569-a279-d49cfe516a97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7da0e0c5-c20c-4569-a279-d49cfe516a97] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004846918s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-626350 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.35s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-778378 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:48:29.147131    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:48:29.450268    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:48:29.753075    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.16s)

x
+
TestStartStop/group/no-preload/serial/Pause (2.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-778378 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778378 -n no-preload-778378
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778378 -n no-preload-778378: exit status 2 (251.059233ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-778378 -n no-preload-778378
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-778378 -n no-preload-778378: exit status 2 (285.463817ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-778378 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778378 -n no-preload-778378
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-778378 -n no-preload-778378
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)

x
+
TestNetworkPlugins/group/auto/Start (85.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m25.843852568s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.84s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-626350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-626350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.345335149s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-626350 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

x
+
TestStartStop/group/embed-certs/serial/Stop (87.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-626350 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-626350 --alsologtostderr -v=3: (1m27.530273496s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.53s)

x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-476738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-476738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.118459407s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

x
+
TestStartStop/group/newest-cni/serial/Stop (8.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-476738 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-476738 --alsologtostderr -v=3: (8.082376544s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.08s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-728268 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5f1e03e8-7958-49d9-83ab-2be732921f93] Pending
helpers_test.go:352: "busybox" [5f1e03e8-7958-49d9-83ab-2be732921f93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5f1e03e8-7958-49d9-83ab-2be732921f93] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004155217s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-728268 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-476738 -n newest-cni-476738
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-476738 -n newest-cni-476738: exit status 7 (62.270641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-476738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (41.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-476738 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-476738 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (40.90588423s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-476738 -n newest-cni-476738
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.22s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-728268 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-728268 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (84.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-728268 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-728268 --alsologtostderr -v=3: (1m24.343128134s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.34s)

x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-507741 "pgrep -a kubelet"
I1124 09:50:00.285073    9629 config.go:182] Loaded profile config "auto-507741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

x
+
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-507741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q7q8c" [0650890b-ecca-4c96-a425-4b1503c25645] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q7q8c" [0650890b-ecca-4c96-a425-4b1503c25645] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004825377s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-476738 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.16s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-626350 -n embed-certs-626350
I1124 09:50:08.412664    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-626350 -n embed-certs-626350: exit status 7 (75.806968ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-626350 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/embed-certs/serial/SecondStart (48.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-626350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
I1124 09:50:08.729710    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:50:09.017090    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
E1124 09:50:09.284756    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:09.291125    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-626350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (48.382363276s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-626350 -n embed-certs-626350
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.67s)

x
+
TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-476738 --alsologtostderr -v=1
E1124 09:50:09.302967    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:09.324366    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:09.365840    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:09.447091    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:09.608604    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:09.930644    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-476738 -n newest-cni-476738
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-476738 -n newest-cni-476738: exit status 2 (228.011423ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-476738 -n newest-cni-476738
E1124 09:50:10.572564    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-476738 -n newest-cni-476738: exit status 2 (267.292781ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-476738 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-476738 --alsologtostderr -v=1: (1.084007119s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-476738 -n newest-cni-476738
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-476738 -n newest-cni-476738
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-507741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1124 09:50:11.854895    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

x
+
TestNetworkPlugins/group/kindnet/Start (77.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1124 09:50:14.417538    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:19.539088    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m17.860329343s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.86s)

x
+
TestNetworkPlugins/group/calico/Start (99.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1124 09:50:29.780504    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.563313    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.569718    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.582002    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.603825    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.645505    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.727495    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:33.889260    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:34.211179    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:34.854139    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:36.136435    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:38.698690    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:43.106470    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:43.820061    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:50.262278    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:54.062187    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m39.715275453s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.72s)
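Note: the repeated "Loading client cert failed" warnings above are emitted by the shared test process (pid 9629), whose cached client transports still reference client certificates of profiles deleted earlier in this run (old-k8s-version-960867, no-preload-778378, functional-014740); they are unrelated to the calico start itself. As a minimal sketch only (assuming the stale entries are reachable from the default kubeconfig, which this log does not confirm), one way to flag users whose certificate files no longer exist on disk:

    # sketch: list kubeconfig users whose client-certificate path is missing
    kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}' |
      while IFS="$(printf '\t')" read -r name cert; do
        [ -n "$cert" ] && [ ! -f "$cert" ] && echo "stale client cert: $name -> $cert"
      done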

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ftj4r" [09e2a85d-c35b-40c6-a6fd-01a4a1791889] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ftj4r" [09e2a85d-c35b-40c6-a6fd-01a4a1791889] Running
E1124 09:51:00.040408    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-014740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005373334s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)
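For reference, the wait above (up to 9m0s for pods matching k8s-app=kubernetes-dashboard) can be reproduced by hand; this is a sketch using kubectl directly rather than the test helper, with the context name taken from this run:

    kubectl --context embed-certs-626350 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=ready --timeout=9m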

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268: exit status 7 (64.780561ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-728268 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)
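The sequence above queries host status while the cluster is stopped (exit status 7 with "Stopped" is expected) and then enables the dashboard addon against the stopped profile. A minimal sketch of the same flow, using the profile name and image override from this run:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-728268   # "Stopped", exit 7
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-728268 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4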

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-728268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-728268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (57.393773136s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.68s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ftj4r" [09e2a85d-c35b-40c6-a6fd-01a4a1791889] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004261801s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-626350 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-626350 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:51:10.563407    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:51:10.852590    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:51:11.160065    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (4.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-626350 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-626350 --alsologtostderr -v=1: (1.090319278s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-626350 -n embed-certs-626350
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-626350 -n embed-certs-626350: exit status 2 (276.576159ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-626350 -n embed-certs-626350
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-626350 -n embed-certs-626350: exit status 2 (257.318484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-626350 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-626350 --alsologtostderr -v=1: (1.426201564s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-626350 -n embed-certs-626350
E1124 09:51:14.543781    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-626350 -n embed-certs-626350
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.42s)
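While paused, `status` is expected to exit with status 2: the API server reports "Paused" and the kubelet "Stopped", and both recover after `unpause`. A sketch of the same round trip, with the profile name from this run:

    out/minikube-linux-amd64 pause -p embed-certs-626350 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-626350   # Paused (exit 2)
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p embed-certs-626350     # Stopped (exit 2)
    out/minikube-linux-amd64 unpause -p embed-certs-626350 --alsologtostderr -v=1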

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (82.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1124 09:51:31.224311    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/old-k8s-version-960867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m22.438918456s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-llz4j" [f14e7220-25b2-41c0-8120-876f71322296] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005797309s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-507741 "pgrep -a kubelet"
I1124 09:51:37.741767    9629 config.go:182] Loaded profile config "kindnet-507741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-507741 replace --force -f testdata/netcat-deployment.yaml
I1124 09:51:38.081170    9629 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h4zn5" [a7840a6e-6cbd-472a-be6f-662d0de372ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h4zn5" [a7840a6e-6cbd-472a-be6f-662d0de372ca] Running
E1124 09:51:44.612294    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/addons-076740/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003711588s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-507741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)
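The four kindnet checks above (NetCatPod, DNS, Localhost, HairPin) follow the same pattern repeated for the calico, custom-flannel, enable-default-cni, flannel, and bridge groups below: deploy the netcat test deployment, then exercise DNS, localhost, and hairpin connectivity from inside it. Gathered into one sketch, with the context name from this run:

    kubectl --context kindnet-507741 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-507741 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"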

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b9pgn" [91a38314-fbdf-4257-8cde-e0c6e9c373a1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b9pgn" [91a38314-fbdf-4257-8cde-e0c6e9c373a1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004109367s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-58qgb" [b604b185-2f1e-4780-bbd4-5ade4a6ba8f5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-58qgb" [b604b185-2f1e-4780-bbd4-5ade4a6ba8f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004242638s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (87.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m27.262303671s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-507741 "pgrep -a kubelet"
I1124 09:52:12.206638    9629 config.go:182] Loaded profile config "calico-507741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-507741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hc4qp" [44ac500d-e669-4961-b5a7-77ecdb758a0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hc4qp" [44ac500d-e669-4961-b5a7-77ecdb758a0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004564691s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b9pgn" [91a38314-fbdf-4257-8cde-e0c6e9c373a1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004262434s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-728268 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-728268 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:52:21.688575    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:52:22.289772    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:52:22.655554    9629 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-728268 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268: exit status 2 (262.246317ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268: exit status 2 (224.046898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-728268 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-728268 -n default-k8s-diff-port-728268
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-507741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (73.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.553275568s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-507741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-507741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9tp7j" [d591c9b5-92fe-4720-9650-736c44ce9a54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9tp7j" [d591c9b5-92fe-4720-9650-736c44ce9a54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005271212s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (90.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-507741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.189401132s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-507741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)
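Each PersistentMounts subtest above runs the same `df -t ext4 <dir> | grep <dir>` probe inside the guest for one directory. As a compact sketch covering all seven paths checked in this group, with the profile name from this run:

    for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
      out/minikube-linux-amd64 -p guest-554599 ssh "df -t ext4 $d | grep $d"
    done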

                                                
                                    
x
+
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.18s)

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-554599 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
E1124 09:53:17.427290    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/no-preload-778378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:53:18.578013    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/functional-843072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-507741 "pgrep -a kubelet"
I1124 09:53:34.494320    9629 config.go:182] Loaded profile config "enable-default-cni-507741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-507741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8d8tt" [8b86cd51-3329-4883-882e-227beb6baefd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8d8tt" [8b86cd51-3329-4883-882e-227beb6baefd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003731517s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6z5cq" [168a5f25-1a6f-4fc7-9c9b-3184e0df7276] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004591727s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-507741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-507741 "pgrep -a kubelet"
I1124 09:53:46.854415    9629 config.go:182] Loaded profile config "flannel-507741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-507741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c4psb" [c611247b-543b-4b0a-a783-b653a3269c4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c4psb" [c611247b-543b-4b0a-a783-b653a3269c4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004795919s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-507741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-507741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-507741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rh78d" [7fc0c285-fa75-4210-be28-b9052e21cd8e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rh78d" [7fc0c285-fa75-4210-be28-b9052e21cd8e] Running
E1124 09:54:22.088612    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:54:22.095010    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:54:22.106446    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:54:22.127911    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:54:22.169368    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:54:22.250902    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003879856s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-507741 exec deployment/netcat -- nslookup kubernetes.default
E1124 09:54:22.413097    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-507741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1124 09:54:22.734813    9629 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-5665/.minikube/profiles/default-k8s-diff-port-728268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (51/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.14
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.29
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
362 TestNetworkPlugins/group/kubenet 3.45
363 TestStartStop/group/disable-driver-mounts 0.18
374 TestNetworkPlugins/group/cilium 3.86
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1124 08:29:27.611904    9629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1124 08:29:27.735514    9629 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1124 08:29:27.750750    9629 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)
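The skip above is driven by the two 404 responses: no CRI-O preload tarball has been published for v1.35.0-beta.0 at either location, so the test reports "No preload image" and bails out. A sketch for checking availability directly, using the URLs as logged above:

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n 1
    curl -sI https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n 1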

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-076740 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
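DockerEnv (and PodmanEnv just below) are gated on the cluster's container runtime, and this job runs cri-o, so both skip. A sketch of such a runtime gate, using a hypothetical helper name rather than minikube's actual functional_test.go:

package functional_sketch

import "testing"

// skipUnlessDockerRuntime is a hypothetical helper showing the shape of the
// gate behind these skips: anything other than the docker runtime bails out.
func skipUnlessDockerRuntime(t *testing.T, containerRuntime string) {
	t.Helper()
	if containerRuntime != "docker" {
		t.Skipf("only validate docker env with docker container runtime, currently testing %s", containerRuntime)
	}
}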

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
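All of the TunnelCmd subtests in this run skip for the same reason: changing the host routing table needs `route` to run under passwordless sudo, which this CI host does not allow. One way to probe that non-interactively (a sketch under that assumption, not the exact check in functional_test_tunnel_test.go):

package main

import (
	"fmt"
	"os/exec"
)

// canRunRouteWithoutPassword runs route under `sudo -n`, which fails instead
// of prompting when a password would be required -- the condition behind the
// "password required to execute 'route'" skips in this report.
func canRunRouteWithoutPassword() bool {
	return exec.Command("sudo", "-n", "route").Run() == nil
}

func main() {
	fmt.Println("passwordless route available:", canRunRouteWithoutPassword())
}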

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
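The gvisor addon test is opt-in behind a --gvisor flag that defaults to false. The flag name comes from the log line above; the wiring shown here is an assumed sketch, not minikube's test harness:

package gvisor_sketch

import (
	"flag"
	"testing"
)

// The real flag is registered in minikube's test main; this just shows the
// opt-in pattern that yields "skipping test because --gvisor=false".
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddonSketch(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... exercise the gvisor addon here ...
}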

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-507741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-507741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-507741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507741"

                                                
                                                
----------------------- debugLogs end: kubenet-507741 [took: 3.295427322s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-507741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-507741
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)
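Every debugLogs query above fails with "context was not found" or "Profile ... not found" because the kubenet test skips before `minikube start` ever runs, so no kubeconfig context or profile exists to inspect. A quick way to confirm that from the same host, shelling out to kubectl (illustrative only, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists lists kubeconfig context names via kubectl (assumed to be on
// PATH) and reports whether the given profile's context is among them.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("kubenet-507741")
	fmt.Println(ok, err) // expected: false for a profile that was never started
}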

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-756097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-756097
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-507741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-507741

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-507741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-507741

>>> host: docker daemon status:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: docker daemon config:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: docker system info:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: cri-docker daemon status:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: cri-docker daemon config:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: cri-dockerd version:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: containerd daemon status:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: containerd daemon config:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: containerd config dump:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: crio daemon status:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: crio daemon config:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: /etc/crio:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

>>> host: crio config:
* Profile "cilium-507741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507741"

----------------------- debugLogs end: cilium-507741 [took: 3.664743799s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-507741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-507741
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)
