Test Report: KVM_Linux_crio 21894

8496c1ca7722bf7d926446d0df8cf9af55d7419f:2025-11-15:42336

Failed tests (3/346)

Order   Failed test                                       Duration (s)
37      TestAddons/parallel/Ingress                       157.99
246     TestPreload                                       126.33
282     TestPause/serial/SecondStartNoReconfiguration     53
TestAddons/parallel/Ingress (157.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-965866 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-965866 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-965866 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [55acc31d-0a00-4815-9a94-f5347b56d0a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [55acc31d-0a00-4815-9a94-f5347b56d0a0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003905953s
I1115 09:41:11.517029  416801 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-965866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.145437041s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-965866 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.252
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-965866 -n addons-965866
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 logs -n 25: (1.258442232s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-186898                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-186898 │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │ 15 Nov 25 09:38 UTC │
	│ start   │ --download-only -p binary-mirror-456316 --alsologtostderr --binary-mirror http://127.0.0.1:36499 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-456316 │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │                     │
	│ delete  │ -p binary-mirror-456316                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-456316 │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │ 15 Nov 25 09:38 UTC │
	│ addons  │ enable dashboard -p addons-965866                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │                     │
	│ addons  │ disable dashboard -p addons-965866                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │                     │
	│ start   │ -p addons-965866 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ enable headlamp -p addons-965866 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ ssh     │ addons-965866 ssh cat /opt/local-path-provisioner/pvc-453b0945-0433-401e-a86e-37483ec44b20_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:41 UTC │
	│ ip      │ addons-965866 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:40 UTC │ 15 Nov 25 09:40 UTC │
	│ addons  │ addons-965866 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-965866                                                                                                                                                                                                                                                                                                                                                                                         │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ addons  │ addons-965866 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ ssh     │ addons-965866 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │                     │
	│ addons  │ addons-965866 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ addons  │ addons-965866 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ addons  │ addons-965866 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ ip      │ addons-965866 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-965866        │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:38:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:38:03.677389  417416 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:38:03.677703  417416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:38:03.677714  417416 out.go:374] Setting ErrFile to fd 2...
	I1115 09:38:03.677719  417416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:38:03.677924  417416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 09:38:03.678517  417416 out.go:368] Setting JSON to false
	I1115 09:38:03.679391  417416 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4831,"bootTime":1763194653,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:38:03.679489  417416 start.go:143] virtualization: kvm guest
	I1115 09:38:03.681326  417416 out.go:179] * [addons-965866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:38:03.682639  417416 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:38:03.682637  417416 notify.go:221] Checking for updates...
	I1115 09:38:03.684163  417416 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:38:03.685570  417416 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 09:38:03.686922  417416 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:38:03.688034  417416 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:38:03.689257  417416 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:38:03.690678  417416 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:38:03.721717  417416 out.go:179] * Using the kvm2 driver based on user configuration
	I1115 09:38:03.722707  417416 start.go:309] selected driver: kvm2
	I1115 09:38:03.722722  417416 start.go:930] validating driver "kvm2" against <nil>
	I1115 09:38:03.722734  417416 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:38:03.723501  417416 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:38:03.723792  417416 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:38:03.723824  417416 cni.go:84] Creating CNI manager for ""
	I1115 09:38:03.723889  417416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:38:03.723905  417416 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1115 09:38:03.723972  417416 start.go:353] cluster config:
	{Name:addons-965866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-965866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1115 09:38:03.724081  417416 iso.go:125] acquiring lock: {Name:mke3d0b50f750b07aabde39a6bc9fa707eafad32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:38:03.725583  417416 out.go:179] * Starting "addons-965866" primary control-plane node in "addons-965866" cluster
	I1115 09:38:03.726680  417416 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:38:03.726709  417416 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:38:03.726725  417416 cache.go:65] Caching tarball of preloaded images
	I1115 09:38:03.726809  417416 preload.go:238] Found /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:38:03.726819  417416 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:38:03.727118  417416 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/config.json ...
	I1115 09:38:03.727139  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/config.json: {Name:mk4f1916ef5f2b81d53c5b75362a331d9bddcedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:03.727739  417416 start.go:360] acquireMachinesLock for addons-965866: {Name:mk50d09d451dfb6834d3dcf4331d8b4da7231bd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 09:38:03.727803  417416 start.go:364] duration metric: took 46.094µs to acquireMachinesLock for "addons-965866"
	I1115 09:38:03.727822  417416 start.go:93] Provisioning new machine with config: &{Name:addons-965866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-965866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:38:03.727919  417416 start.go:125] createHost starting for "" (driver="kvm2")
	I1115 09:38:03.729320  417416 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1115 09:38:03.729499  417416 start.go:159] libmachine.API.Create for "addons-965866" (driver="kvm2")
	I1115 09:38:03.729528  417416 client.go:173] LocalClient.Create starting
	I1115 09:38:03.729629  417416 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem
	I1115 09:38:03.821119  417416 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem
	I1115 09:38:04.141437  417416 main.go:143] libmachine: creating domain...
	I1115 09:38:04.141457  417416 main.go:143] libmachine: creating network...
	I1115 09:38:04.142806  417416 main.go:143] libmachine: found existing default network
	I1115 09:38:04.143067  417416 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1115 09:38:04.143688  417416 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a5f0e0}
	I1115 09:38:04.143815  417416 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-965866</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1115 09:38:04.150108  417416 main.go:143] libmachine: creating private network mk-addons-965866 192.168.39.0/24...
	I1115 09:38:04.218236  417416 main.go:143] libmachine: private network mk-addons-965866 192.168.39.0/24 created
	I1115 09:38:04.218564  417416 main.go:143] libmachine: <network>
	  <name>mk-addons-965866</name>
	  <uuid>1fe1a607-394d-48ef-98c3-a4fa4849c49b</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:30:b8:2b'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1115 09:38:04.218612  417416 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866 ...
	I1115 09:38:04.218641  417416 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21894-412813/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1115 09:38:04.218680  417416 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:38:04.218775  417416 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21894-412813/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21894-412813/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1115 09:38:04.522782  417416 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa...
	I1115 09:38:04.760045  417416 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/addons-965866.rawdisk...
	I1115 09:38:04.760104  417416 main.go:143] libmachine: Writing magic tar header
	I1115 09:38:04.760127  417416 main.go:143] libmachine: Writing SSH key tar header
	I1115 09:38:04.760222  417416 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866 ...
	I1115 09:38:04.760351  417416 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866
	I1115 09:38:04.760388  417416 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866 (perms=drwx------)
	I1115 09:38:04.760407  417416 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21894-412813/.minikube/machines
	I1115 09:38:04.760424  417416 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21894-412813/.minikube/machines (perms=drwxr-xr-x)
	I1115 09:38:04.760458  417416 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:38:04.760477  417416 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21894-412813/.minikube (perms=drwxr-xr-x)
	I1115 09:38:04.760491  417416 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21894-412813
	I1115 09:38:04.760504  417416 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21894-412813 (perms=drwxrwxr-x)
	I1115 09:38:04.760523  417416 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1115 09:38:04.760540  417416 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1115 09:38:04.760557  417416 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1115 09:38:04.760568  417416 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1115 09:38:04.760582  417416 main.go:143] libmachine: checking permissions on dir: /home
	I1115 09:38:04.760595  417416 main.go:143] libmachine: skipping /home - not owner
	I1115 09:38:04.760604  417416 main.go:143] libmachine: defining domain...
	I1115 09:38:04.762045  417416 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-965866</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/addons-965866.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-965866'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1115 09:38:04.770295  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:0b:ca:ad in network default
	I1115 09:38:04.770902  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:04.770918  417416 main.go:143] libmachine: starting domain...
	I1115 09:38:04.770922  417416 main.go:143] libmachine: ensuring networks are active...
	I1115 09:38:04.771557  417416 main.go:143] libmachine: Ensuring network default is active
	I1115 09:38:04.771940  417416 main.go:143] libmachine: Ensuring network mk-addons-965866 is active
	I1115 09:38:04.772485  417416 main.go:143] libmachine: getting domain XML...
	I1115 09:38:04.773470  417416 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-965866</name>
	  <uuid>267ed3b3-9030-4a58-989f-ff9e874a9a56</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/addons-965866.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ba:72:38'/>
	      <source network='mk-addons-965866'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0b:ca:ad'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1115 09:38:06.101886  417416 main.go:143] libmachine: waiting for domain to start...
	I1115 09:38:06.103491  417416 main.go:143] libmachine: domain is now running
	I1115 09:38:06.103517  417416 main.go:143] libmachine: waiting for IP...
	I1115 09:38:06.104365  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:06.104923  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:06.104943  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:06.105275  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:06.105332  417416 retry.go:31] will retry after 215.753191ms: waiting for domain to come up
	I1115 09:38:06.322800  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:06.323366  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:06.323383  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:06.323714  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:06.323760  417416 retry.go:31] will retry after 240.789192ms: waiting for domain to come up
	I1115 09:38:06.566479  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:06.567212  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:06.567237  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:06.567554  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:06.567609  417416 retry.go:31] will retry after 478.846236ms: waiting for domain to come up
	I1115 09:38:07.048280  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:07.048830  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:07.048848  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:07.049137  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:07.049183  417416 retry.go:31] will retry after 516.520971ms: waiting for domain to come up
	I1115 09:38:07.567040  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:07.567562  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:07.567584  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:07.567893  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:07.567943  417416 retry.go:31] will retry after 701.613382ms: waiting for domain to come up
	I1115 09:38:08.271377  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:08.271901  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:08.271919  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:08.272179  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:08.272219  417416 retry.go:31] will retry after 937.232002ms: waiting for domain to come up
	I1115 09:38:09.211555  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:09.212129  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:09.212149  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:09.212458  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:09.212500  417416 retry.go:31] will retry after 871.341734ms: waiting for domain to come up
	I1115 09:38:10.085590  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:10.086214  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:10.086234  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:10.086589  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:10.086633  417416 retry.go:31] will retry after 1.086257939s: waiting for domain to come up
	I1115 09:38:11.174907  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:11.175539  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:11.175568  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:11.175955  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:11.175999  417416 retry.go:31] will retry after 1.562476165s: waiting for domain to come up
	I1115 09:38:12.740792  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:12.741349  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:12.741367  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:12.741686  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:12.741728  417416 retry.go:31] will retry after 2.207581247s: waiting for domain to come up
	I1115 09:38:14.951481  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:14.952042  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:14.952064  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:14.952368  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:14.952412  417416 retry.go:31] will retry after 2.711227075s: waiting for domain to come up
	I1115 09:38:17.667386  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:17.667973  417416 main.go:143] libmachine: no network interface addresses found for domain addons-965866 (source=lease)
	I1115 09:38:17.667990  417416 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:38:17.668304  417416 main.go:143] libmachine: unable to find current IP address of domain addons-965866 in network mk-addons-965866 (interfaces detected: [])
	I1115 09:38:17.668347  417416 retry.go:31] will retry after 2.419535256s: waiting for domain to come up
	I1115 09:38:20.090068  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.090781  417416 main.go:143] libmachine: domain addons-965866 has current primary IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.090801  417416 main.go:143] libmachine: found domain IP: 192.168.39.252
	I1115 09:38:20.090811  417416 main.go:143] libmachine: reserving static IP address...
	I1115 09:38:20.091299  417416 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-965866", mac: "52:54:00:ba:72:38", ip: "192.168.39.252"} in network mk-addons-965866
	I1115 09:38:20.290443  417416 main.go:143] libmachine: reserved static IP address 192.168.39.252 for domain addons-965866
	I1115 09:38:20.290467  417416 main.go:143] libmachine: waiting for SSH...
	I1115 09:38:20.290474  417416 main.go:143] libmachine: Getting to WaitForSSH function...
	I1115 09:38:20.293887  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.294462  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.294501  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.295496  417416 main.go:143] libmachine: Using SSH client type: native
	I1115 09:38:20.295872  417416 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I1115 09:38:20.295890  417416 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1115 09:38:20.406221  417416 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:38:20.406750  417416 main.go:143] libmachine: domain creation complete
	I1115 09:38:20.408205  417416 machine.go:94] provisionDockerMachine start ...
	I1115 09:38:20.410728  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.411154  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.411178  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.411340  417416 main.go:143] libmachine: Using SSH client type: native
	I1115 09:38:20.411539  417416 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I1115 09:38:20.411550  417416 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:38:20.523459  417416 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1115 09:38:20.523490  417416 buildroot.go:166] provisioning hostname "addons-965866"
	I1115 09:38:20.526392  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.526908  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.526938  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.527118  417416 main.go:143] libmachine: Using SSH client type: native
	I1115 09:38:20.527334  417416 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I1115 09:38:20.527354  417416 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-965866 && echo "addons-965866" | sudo tee /etc/hostname
	I1115 09:38:20.653606  417416 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-965866
	
	I1115 09:38:20.657004  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.657426  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.657457  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.657626  417416 main.go:143] libmachine: Using SSH client type: native
	I1115 09:38:20.657930  417416 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I1115 09:38:20.657954  417416 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-965866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-965866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-965866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:38:20.777644  417416 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:38:20.777709  417416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 09:38:20.777732  417416 buildroot.go:174] setting up certificates
	I1115 09:38:20.777748  417416 provision.go:84] configureAuth start
	I1115 09:38:20.780614  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.781092  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.781120  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.783446  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.783839  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.783865  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.784008  417416 provision.go:143] copyHostCerts
	I1115 09:38:20.784095  417416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 09:38:20.784240  417416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 09:38:20.784398  417416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 09:38:20.784552  417416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.addons-965866 san=[127.0.0.1 192.168.39.252 addons-965866 localhost minikube]
	I1115 09:38:20.905420  417416 provision.go:177] copyRemoteCerts
	I1115 09:38:20.905500  417416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:38:20.908121  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.908458  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:20.908483  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:20.908630  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:20.997297  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:38:21.027481  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:38:21.057247  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:38:21.086453  417416 provision.go:87] duration metric: took 308.690019ms to configureAuth
	I1115 09:38:21.086484  417416 buildroot.go:189] setting minikube options for container-runtime
	I1115 09:38:21.086742  417416 config.go:182] Loaded profile config "addons-965866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:21.089737  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.090251  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.090280  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.090490  417416 main.go:143] libmachine: Using SSH client type: native
	I1115 09:38:21.090788  417416 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I1115 09:38:21.090808  417416 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:38:21.345746  417416 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:38:21.345783  417416 machine.go:97] duration metric: took 937.557744ms to provisionDockerMachine
	I1115 09:38:21.345798  417416 client.go:176] duration metric: took 17.616264175s to LocalClient.Create
	I1115 09:38:21.345819  417416 start.go:167] duration metric: took 17.616317823s to libmachine.API.Create "addons-965866"
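	
	The provisioner has just written /etc/sysconfig/crio.minikube with the --insecure-registry override and restarted CRI-O. A minimal sketch of verifying it took effect from the host, assuming the crio unit loads that file as an EnvironmentFile (profile name taken from this run):
	
	    # show the generated file inside the guest
	    minikube -p addons-965866 ssh -- cat /etc/sysconfig/crio.minikube
	    # confirm the restarted crio unit picked the variable up
	    minikube -p addons-965866 ssh -- "systemctl show crio -p Environment"
	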
	I1115 09:38:21.345830  417416 start.go:293] postStartSetup for "addons-965866" (driver="kvm2")
	I1115 09:38:21.345845  417416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:38:21.346045  417416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:38:21.348961  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.349391  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.349416  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.349586  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:21.434185  417416 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:38:21.439360  417416 info.go:137] Remote host: Buildroot 2025.02
	I1115 09:38:21.439389  417416 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/addons for local assets ...
	I1115 09:38:21.439463  417416 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/files for local assets ...
	I1115 09:38:21.439484  417416 start.go:296] duration metric: took 93.645877ms for postStartSetup
	I1115 09:38:21.442907  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.443319  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.443344  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.443604  417416 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/config.json ...
	I1115 09:38:21.443825  417416 start.go:128] duration metric: took 17.715893249s to createHost
	I1115 09:38:21.446244  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.446693  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.446724  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.446932  417416 main.go:143] libmachine: Using SSH client type: native
	I1115 09:38:21.447162  417416 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I1115 09:38:21.447176  417416 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 09:38:21.555926  417416 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763199501.526659913
	
	I1115 09:38:21.555949  417416 fix.go:216] guest clock: 1763199501.526659913
	I1115 09:38:21.555957  417416 fix.go:229] Guest: 2025-11-15 09:38:21.526659913 +0000 UTC Remote: 2025-11-15 09:38:21.443844254 +0000 UTC m=+17.819418413 (delta=82.815659ms)
	I1115 09:38:21.555979  417416 fix.go:200] guest clock delta is within tolerance: 82.815659ms
	I1115 09:38:21.555991  417416 start.go:83] releasing machines lock for "addons-965866", held for 17.828177671s
	I1115 09:38:21.558803  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.559182  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.559202  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.559810  417416 ssh_runner.go:195] Run: cat /version.json
	I1115 09:38:21.559918  417416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:38:21.562818  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.563109  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.563157  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.563178  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.563319  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:21.563704  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:21.563738  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:21.563932  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:21.642795  417416 ssh_runner.go:195] Run: systemctl --version
	I1115 09:38:21.668184  417416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:38:21.824156  417416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:38:21.830859  417416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:38:21.830947  417416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:38:21.851308  417416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:38:21.851339  417416 start.go:496] detecting cgroup driver to use...
	I1115 09:38:21.851441  417416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:38:21.871646  417416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:38:21.889004  417416 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:38:21.889065  417416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:38:21.906437  417416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:38:21.923138  417416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:38:22.063875  417416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:38:22.275161  417416 docker.go:234] disabling docker service ...
	I1115 09:38:22.275245  417416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:38:22.296842  417416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:38:22.311307  417416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:38:22.462037  417416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:38:22.602778  417416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:38:22.619257  417416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:38:22.641199  417416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:38:22.641270  417416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:38:22.653649  417416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:38:22.653747  417416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:38:22.666090  417416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:38:22.678424  417416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:38:22.690463  417416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:38:22.703329  417416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:38:22.715355  417416 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:38:22.735566  417416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
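	
	The run of sed/grep commands above edits individual keys of /etc/crio/crio.conf.d/02-crio.conf in place rather than rewriting the whole drop-in. A quick way to confirm the result inside the guest; the expected values are the ones the commands above set:
	
	    # expect: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs",
	    # conmon_cgroup = "pod", and a default_sysctls list containing
	    # "net.ipv4.ip_unprivileged_port_start=0"
	    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	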
	I1115 09:38:22.747433  417416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:38:22.758586  417416 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 09:38:22.758682  417416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 09:38:22.780115  417416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
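	
	The sysctl read above fails only because br_netfilter is not loaded yet, which the log itself notes as acceptable; the next two commands load the module and enable IPv4 forwarding. A quick check of both prerequisites afterwards, run inside the guest:
	
	    # bridge-nf-call-iptables typically defaults to 1 once br_netfilter is loaded;
	    # ip_forward was explicitly set to 1 above
	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	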
	I1115 09:38:22.794271  417416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:38:22.935895  417416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:38:23.056015  417416 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:38:23.056118  417416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:38:23.061259  417416 start.go:564] Will wait 60s for crictl version
	I1115 09:38:23.061341  417416 ssh_runner.go:195] Run: which crictl
	I1115 09:38:23.065211  417416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 09:38:23.106032  417416 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 09:38:23.106191  417416 ssh_runner.go:195] Run: crio --version
	I1115 09:38:23.135470  417416 ssh_runner.go:195] Run: crio --version
	I1115 09:38:23.166751  417416 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1115 09:38:23.170736  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:23.171109  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:23.171132  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:23.171313  417416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1115 09:38:23.175856  417416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:38:23.191440  417416 kubeadm.go:884] updating cluster {Name:addons-965866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-965866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:38:23.191549  417416 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:38:23.191595  417416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:38:23.233313  417416 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 09:38:23.233388  417416 ssh_runner.go:195] Run: which lz4
	I1115 09:38:23.238109  417416 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1115 09:38:23.243120  417416 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1115 09:38:23.243151  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1115 09:38:24.550041  417416 crio.go:462] duration metric: took 1.311967894s to copy over tarball
	I1115 09:38:24.550156  417416 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1115 09:38:26.177212  417416 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.627016076s)
	I1115 09:38:26.177253  417416 crio.go:469] duration metric: took 1.627168352s to extract the tarball
	I1115 09:38:26.177261  417416 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1115 09:38:26.222988  417416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:38:26.267563  417416 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:38:26.267597  417416 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:38:26.267607  417416 kubeadm.go:935] updating node { 192.168.39.252 8443 v1.34.1 crio true true} ...
	I1115 09:38:26.267741  417416 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-965866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-965866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
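	
	The unit fragment above (an empty ExecStart= to clear the packaged command, then the full kubelet invocation) is what gets copied a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Two quick ways to see the merged unit on the guest once it is in place:
	
	    # print kubelet.service together with all drop-ins, including 10-kubeadm.conf
	    systemctl cat kubelet
	    # confirm which ExecStart is actually in effect
	    systemctl show kubelet -p ExecStart
	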
	I1115 09:38:26.267826  417416 ssh_runner.go:195] Run: crio config
	I1115 09:38:26.314102  417416 cni.go:84] Creating CNI manager for ""
	I1115 09:38:26.314131  417416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:38:26.314150  417416 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:38:26.314174  417416 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-965866 NodeName:addons-965866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:38:26.314297  417416 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-965866"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.252"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
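	
	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new and, after the copy a little further down, feeds to kubeadm init. A hedged sketch of exercising a config of this shape without touching /etc/kubernetes, using the same binary path this run uses:
	
	    # prints what init would do, writing manifests to a temporary directory instead
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run
	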
	
	I1115 09:38:26.314383  417416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:38:26.326919  417416 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:38:26.326994  417416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:38:26.339192  417416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1115 09:38:26.359926  417416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:38:26.382904  417416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1115 09:38:26.403979  417416 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I1115 09:38:26.407986  417416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
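	
	This is the same append-if-missing pattern used earlier for host.minikube.internal; between the two, /etc/hosts inside the guest carries the gateway and node entries. To confirm:
	
	    grep minikube.internal /etc/hosts
	    # expected entries for this run:
	    # 192.168.39.1    host.minikube.internal
	    # 192.168.39.252  control-plane.minikube.internal
	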
	I1115 09:38:26.422732  417416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:38:26.571266  417416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:38:26.592776  417416 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866 for IP: 192.168.39.252
	I1115 09:38:26.592801  417416 certs.go:195] generating shared ca certs ...
	I1115 09:38:26.592824  417416 certs.go:227] acquiring lock for ca certs: {Name:mk02a14faa29b024d0296173a778127e8da9e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:26.593004  417416 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key
	I1115 09:38:26.821814  417416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt ...
	I1115 09:38:26.821847  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt: {Name:mk895422cb2fc1d4ce015d84f16ff6f5317542ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:26.822085  417416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key ...
	I1115 09:38:26.822107  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key: {Name:mka95e44aedfa36486af44291db878b9cdd3cdab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:26.822223  417416 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key
	I1115 09:38:27.053704  417416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.crt ...
	I1115 09:38:27.053742  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.crt: {Name:mk475342059d979187773cd734b5384b36ac0a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.053953  417416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key ...
	I1115 09:38:27.053971  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key: {Name:mkba4fd70d79c3fcaad09b3e9c323d444a4bb066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.054083  417416 certs.go:257] generating profile certs ...
	I1115 09:38:27.054180  417416 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.key
	I1115 09:38:27.054201  417416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt with IP's: []
	I1115 09:38:27.300242  417416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt ...
	I1115 09:38:27.300284  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: {Name:mka1a4ab93e0c61477fb81e7d0faa304b76e85a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.301246  417416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.key ...
	I1115 09:38:27.301266  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.key: {Name:mk4748665f46fc4ca6cb82865183e031bee5aa5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.301352  417416 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.key.e005d9c9
	I1115 09:38:27.301371  417416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.crt.e005d9c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.252]
	I1115 09:38:27.873067  417416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.crt.e005d9c9 ...
	I1115 09:38:27.873097  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.crt.e005d9c9: {Name:mk28f5c8df66d854f249d4ccb9e1f47df0843757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.873261  417416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.key.e005d9c9 ...
	I1115 09:38:27.873274  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.key.e005d9c9: {Name:mk8abf1244e0bec98558605510726eef24d1d893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.873343  417416 certs.go:382] copying /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.crt.e005d9c9 -> /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.crt
	I1115 09:38:27.873412  417416 certs.go:386] copying /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.key.e005d9c9 -> /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.key
	I1115 09:38:27.873461  417416 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.key
	I1115 09:38:27.873479  417416 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.crt with IP's: []
	I1115 09:38:27.955435  417416 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.crt ...
	I1115 09:38:27.955466  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.crt: {Name:mk213375c8eb33570115e2b4ed026ad76258975b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.955643  417416 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.key ...
	I1115 09:38:27.955656  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.key: {Name:mk566f6320d0c1db721a7a70117d8ffe68e9e20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:27.956495  417416 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:38:27.956533  417416 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:38:27.956554  417416 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:38:27.956578  417416 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem (1675 bytes)
	I1115 09:38:27.957222  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:38:27.993598  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:38:28.024269  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:38:28.054497  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:38:28.085799  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:38:28.116237  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 09:38:28.146303  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:38:28.178823  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:38:28.222318  417416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:38:28.252049  417416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:38:28.273411  417416 ssh_runner.go:195] Run: openssl version
	I1115 09:38:28.279891  417416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:38:28.292977  417416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:38:28.298226  417416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:38 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:38:28.298289  417416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:38:28.305966  417416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
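	
	The b5213941.0 link name is not arbitrary: it is the OpenSSL subject-name hash of minikubeCA.pem, the value the `openssl x509 -hash` call two lines up prints. The same derivation, spelled out:
	
	    # for this run the computed hash is b5213941, hence the b5213941.0 link above
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	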
	I1115 09:38:28.318971  417416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:38:28.323646  417416 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:38:28.323729  417416 kubeadm.go:401] StartCluster: {Name:addons-965866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 C
lusterName:addons-965866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:38:28.323825  417416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:38:28.323924  417416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:38:28.362071  417416 cri.go:89] found id: ""
	I1115 09:38:28.362153  417416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:38:28.374250  417416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:38:28.386085  417416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:38:28.397691  417416 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:38:28.397750  417416 kubeadm.go:158] found existing configuration files:
	
	I1115 09:38:28.397805  417416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:38:28.409351  417416 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:38:28.409433  417416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:38:28.421502  417416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:38:28.432171  417416 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:38:28.432236  417416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:38:28.443816  417416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:38:28.454550  417416 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:38:28.454622  417416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:38:28.466534  417416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:38:28.477195  417416 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:38:28.477268  417416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:38:28.489209  417416 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1115 09:38:28.635707  417416 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:38:40.349211  417416 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:38:40.349289  417416 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:38:40.349384  417416 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:38:40.349504  417416 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:38:40.349611  417416 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:38:40.349733  417416 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:38:40.351317  417416 out.go:252]   - Generating certificates and keys ...
	I1115 09:38:40.351443  417416 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:38:40.351615  417416 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:38:40.351749  417416 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:38:40.351817  417416 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:38:40.351870  417416 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:38:40.351915  417416 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:38:40.351961  417416 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:38:40.352068  417416 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-965866 localhost] and IPs [192.168.39.252 127.0.0.1 ::1]
	I1115 09:38:40.352113  417416 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:38:40.352228  417416 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-965866 localhost] and IPs [192.168.39.252 127.0.0.1 ::1]
	I1115 09:38:40.352343  417416 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:38:40.352438  417416 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:38:40.352518  417416 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:38:40.352587  417416 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:38:40.352691  417416 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:38:40.352761  417416 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:38:40.352831  417416 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:38:40.352914  417416 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:38:40.352972  417416 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:38:40.353038  417416 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:38:40.353134  417416 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:38:40.354529  417416 out.go:252]   - Booting up control plane ...
	I1115 09:38:40.354633  417416 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:38:40.354738  417416 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:38:40.354824  417416 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:38:40.354921  417416 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:38:40.355005  417416 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:38:40.355094  417416 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:38:40.355170  417416 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:38:40.355243  417416 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:38:40.355385  417416 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:38:40.355468  417416 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:38:40.355572  417416 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.966436ms
	I1115 09:38:40.355733  417416 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:38:40.355844  417416 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.252:8443/livez
	I1115 09:38:40.355971  417416 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:38:40.356040  417416 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:38:40.356114  417416 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.587312956s
	I1115 09:38:40.356240  417416 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.646782092s
	I1115 09:38:40.356338  417416 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501693676s
	I1115 09:38:40.356461  417416 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:38:40.356616  417416 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:38:40.356711  417416 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:38:40.356903  417416 kubeadm.go:319] [mark-control-plane] Marking the node addons-965866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:38:40.356989  417416 kubeadm.go:319] [bootstrap-token] Using token: 4b2l2d.ep7hhd2n32x5fb0o
	I1115 09:38:40.358265  417416 out.go:252]   - Configuring RBAC rules ...
	I1115 09:38:40.358363  417416 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:38:40.358464  417416 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:38:40.358613  417416 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:38:40.358784  417416 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:38:40.358906  417416 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:38:40.359023  417416 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:38:40.359133  417416 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:38:40.359202  417416 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:38:40.359268  417416 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:38:40.359277  417416 kubeadm.go:319] 
	I1115 09:38:40.359361  417416 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:38:40.359371  417416 kubeadm.go:319] 
	I1115 09:38:40.359490  417416 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:38:40.359507  417416 kubeadm.go:319] 
	I1115 09:38:40.359528  417416 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:38:40.359583  417416 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:38:40.359626  417416 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:38:40.359632  417416 kubeadm.go:319] 
	I1115 09:38:40.359691  417416 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:38:40.359697  417416 kubeadm.go:319] 
	I1115 09:38:40.359754  417416 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:38:40.359762  417416 kubeadm.go:319] 
	I1115 09:38:40.359806  417416 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:38:40.359873  417416 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:38:40.359944  417416 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:38:40.359956  417416 kubeadm.go:319] 
	I1115 09:38:40.360041  417416 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:38:40.360126  417416 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:38:40.360133  417416 kubeadm.go:319] 
	I1115 09:38:40.360242  417416 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4b2l2d.ep7hhd2n32x5fb0o \
	I1115 09:38:40.360336  417416 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ae816884f4fa051a1910ec2bdb70ed8a9b3a0e7d695314d04512616fbdb79e2e \
	I1115 09:38:40.360356  417416 kubeadm.go:319] 	--control-plane 
	I1115 09:38:40.360362  417416 kubeadm.go:319] 
	I1115 09:38:40.360433  417416 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:38:40.360439  417416 kubeadm.go:319] 
	I1115 09:38:40.360500  417416 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4b2l2d.ep7hhd2n32x5fb0o \
	I1115 09:38:40.360594  417416 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ae816884f4fa051a1910ec2bdb70ed8a9b3a0e7d695314d04512616fbdb79e2e 
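	
	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 of the cluster CA's DER-encoded public key. Since this cluster keeps its PKI under /var/lib/minikube/certs (the certificatesDir in the ClusterConfiguration earlier in this log), the value can be recomputed on the node with the usual openssl pipeline:
	
	    # should reproduce the ae816884... value used in the join commands above
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	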
	I1115 09:38:40.360605  417416 cni.go:84] Creating CNI manager for ""
	I1115 09:38:40.360612  417416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:38:40.361900  417416 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 09:38:40.362826  417416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 09:38:40.376427  417416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 09:38:40.398482  417416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:38:40.398545  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:40.398648  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-965866 minikube.k8s.io/updated_at=2025_11_15T09_38_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=addons-965866 minikube.k8s.io/primary=true
	I1115 09:38:40.557885  417416 ops.go:34] apiserver oom_adj: -16
	I1115 09:38:40.558074  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:41.058360  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:41.558822  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:42.059007  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:42.558404  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:43.058546  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:43.558810  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:44.058732  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:44.558181  417416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:38:44.634541  417416 kubeadm.go:1114] duration metric: took 4.236057022s to wait for elevateKubeSystemPrivileges
	I1115 09:38:44.634586  417416 kubeadm.go:403] duration metric: took 16.310855407s to StartCluster
	I1115 09:38:44.634613  417416 settings.go:142] acquiring lock: {Name:mk51bbf0fd9b357d299ebd118e728450a954032c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:44.634789  417416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 09:38:44.635317  417416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/kubeconfig: {Name:mk18351328d03342e92a234b66dd855b67ad51ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:38:44.635562  417416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:38:44.635585  417416 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:38:44.635670  417416 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:38:44.635798  417416 addons.go:70] Setting yakd=true in profile "addons-965866"
	I1115 09:38:44.635826  417416 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-965866"
	I1115 09:38:44.635832  417416 config.go:182] Loaded profile config "addons-965866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:44.635844  417416 addons.go:70] Setting registry=true in profile "addons-965866"
	I1115 09:38:44.635867  417416 addons.go:239] Setting addon yakd=true in "addons-965866"
	I1115 09:38:44.635876  417416 addons.go:239] Setting addon registry=true in "addons-965866"
	I1115 09:38:44.635889  417416 addons.go:70] Setting metrics-server=true in profile "addons-965866"
	I1115 09:38:44.635902  417416 addons.go:239] Setting addon metrics-server=true in "addons-965866"
	I1115 09:38:44.635909  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.635901  417416 addons.go:70] Setting inspektor-gadget=true in profile "addons-965866"
	I1115 09:38:44.635928  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.635930  417416 addons.go:70] Setting default-storageclass=true in profile "addons-965866"
	I1115 09:38:44.635945  417416 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-965866"
	I1115 09:38:44.635938  417416 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-965866"
	I1115 09:38:44.635958  417416 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-965866"
	I1115 09:38:44.635993  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.636182  417416 addons.go:239] Setting addon inspektor-gadget=true in "addons-965866"
	I1115 09:38:44.636210  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.636265  417416 addons.go:70] Setting ingress=true in profile "addons-965866"
	I1115 09:38:44.636287  417416 addons.go:239] Setting addon ingress=true in "addons-965866"
	I1115 09:38:44.636330  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.636340  417416 addons.go:70] Setting ingress-dns=true in profile "addons-965866"
	I1115 09:38:44.636357  417416 addons.go:239] Setting addon ingress-dns=true in "addons-965866"
	I1115 09:38:44.636388  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.635916  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.636958  417416 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-965866"
	I1115 09:38:44.636991  417416 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-965866"
	I1115 09:38:44.635804  417416 addons.go:70] Setting gcp-auth=true in profile "addons-965866"
	I1115 09:38:44.637057  417416 mustload.go:66] Loading cluster: addons-965866
	I1115 09:38:44.637217  417416 addons.go:70] Setting registry-creds=true in profile "addons-965866"
	I1115 09:38:44.637239  417416 addons.go:239] Setting addon registry-creds=true in "addons-965866"
	I1115 09:38:44.637251  417416 config.go:182] Loaded profile config "addons-965866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:44.637261  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.635814  417416 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-965866"
	I1115 09:38:44.637478  417416 addons.go:70] Setting storage-provisioner=true in profile "addons-965866"
	I1115 09:38:44.637492  417416 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-965866"
	I1115 09:38:44.637498  417416 addons.go:239] Setting addon storage-provisioner=true in "addons-965866"
	I1115 09:38:44.637523  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.637529  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.637679  417416 addons.go:70] Setting volcano=true in profile "addons-965866"
	I1115 09:38:44.637698  417416 addons.go:239] Setting addon volcano=true in "addons-965866"
	I1115 09:38:44.637721  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.635913  417416 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-965866"
	I1115 09:38:44.637764  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.637976  417416 addons.go:70] Setting volumesnapshots=true in profile "addons-965866"
	I1115 09:38:44.637995  417416 addons.go:239] Setting addon volumesnapshots=true in "addons-965866"
	I1115 09:38:44.635821  417416 addons.go:70] Setting cloud-spanner=true in profile "addons-965866"
	I1115 09:38:44.638015  417416 addons.go:239] Setting addon cloud-spanner=true in "addons-965866"
	I1115 09:38:44.638018  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.638039  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.638339  417416 out.go:179] * Verifying Kubernetes components...
	I1115 09:38:44.639834  417416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:38:44.644038  417416 addons.go:239] Setting addon default-storageclass=true in "addons-965866"
	I1115 09:38:44.644070  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.644607  417416 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:38:44.644685  417416 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:38:44.644711  417416 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:38:44.645622  417416 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:38:44.645624  417416 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:38:44.646039  417416 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:38:44.646514  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.646833  417416 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:38:44.646920  417416 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:38:44.646942  417416 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:38:44.646952  417416 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:38:44.646964  417416 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:38:44.646973  417416 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:38:44.646964  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:38:44.647072  417416 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-965866"
	I1115 09:38:44.647107  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:44.647653  417416 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:38:44.647681  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1115 09:38:44.649064  417416 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:38:44.649119  417416 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:38:44.649129  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:38:44.649898  417416 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:38:44.649899  417416 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:38:44.649919  417416 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:38:44.649941  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:38:44.649960  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:38:44.649937  417416 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:38:44.650423  417416 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:38:44.651405  417416 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:38:44.651410  417416 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:38:44.651429  417416 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:38:44.651782  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:38:44.651429  417416 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:38:44.651889  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:38:44.650701  417416 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 09:38:44.652168  417416 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:38:44.652185  417416 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:38:44.652193  417416 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:38:44.652205  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:38:44.653008  417416 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:38:44.653026  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:38:44.653751  417416 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:38:44.653778  417416 out.go:179]   - Using image docker.io/busybox:stable
	I1115 09:38:44.653783  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:38:44.653802  417416 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:38:44.653817  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:38:44.654881  417416 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:38:44.654896  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:38:44.655710  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:38:44.655716  417416 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:38:44.656442  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.657623  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.658222  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.658270  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.659223  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.659254  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.659266  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.659284  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.659295  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.660586  417416 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:38:44.660613  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:38:44.660643  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.661213  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.661518  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:38:44.661918  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.662024  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.662083  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.662118  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.662817  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.662912  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.662826  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.662948  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.663644  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.663711  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.663969  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:38:44.664722  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.664769  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.664808  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.664813  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.664886  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.665348  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.665611  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.665756  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666074  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666191  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.666206  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.666227  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666224  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666317  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.666341  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666438  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666504  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:38:44.666566  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.666599  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.666703  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.666703  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.667090  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.667114  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.667124  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.667310  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.667374  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.667435  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.667740  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.667753  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.667778  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.667962  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.668098  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.668834  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.669085  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:38:44.669266  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.669290  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.669449  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:44.671367  417416 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:38:44.672383  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:38:44.672405  417416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:38:44.675318  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.675766  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:44.675794  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:44.675942  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:45.084719  417416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:38:45.084817  417416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:38:45.364309  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:38:45.381949  417416 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:38:45.381975  417416 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:38:45.425050  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:38:45.437766  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:38:45.441465  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:38:45.442695  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:38:45.442723  417416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:38:45.443585  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:38:45.448251  417416 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:38:45.448278  417416 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:38:45.519429  417416 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:38:45.519460  417416 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:38:45.555446  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:38:45.561315  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:38:45.564072  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:38:45.574250  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:38:45.584093  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:38:45.591557  417416 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:38:45.591595  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:38:45.706749  417416 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:38:45.706786  417416 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:38:45.753873  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:38:45.753901  417416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:38:45.782545  417416 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:38:45.782575  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:38:45.821037  417416 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:38:45.821079  417416 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:38:45.843400  417416 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:38:45.843428  417416 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:38:45.980880  417416 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:38:45.980910  417416 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:38:45.981232  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:38:45.981248  417416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:38:46.038600  417416 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:38:46.038637  417416 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:38:46.121348  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:38:46.148850  417416 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:38:46.148895  417416 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:38:46.194971  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:38:46.195004  417416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:38:46.281643  417416 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:38:46.281690  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:38:46.323405  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:38:46.725812  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:38:46.725862  417416 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:38:46.732903  417416 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:38:46.732942  417416 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:38:46.848456  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:38:47.270782  417416 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:38:47.270831  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:38:47.450264  417416 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:38:47.450288  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:38:47.696403  417416 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:38:47.696434  417416 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:38:47.830273  417416 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.745410244s)
	I1115 09:38:47.830292  417416 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.745543192s)
	I1115 09:38:47.830309  417416 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1115 09:38:47.830394  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.466045953s)
	I1115 09:38:47.830490  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.405404068s)
	I1115 09:38:47.831253  417416 node_ready.go:35] waiting up to 6m0s for node "addons-965866" to be "Ready" ...
	I1115 09:38:47.892508  417416 node_ready.go:49] node "addons-965866" is "Ready"
	I1115 09:38:47.892543  417416 node_ready.go:38] duration metric: took 61.264787ms for node "addons-965866" to be "Ready" ...
	I1115 09:38:47.892560  417416 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:38:47.892643  417416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:38:47.995311  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:38:48.300897  417416 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:38:48.300941  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:38:48.354296  417416 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-965866" context rescaled to 1 replicas
	I1115 09:38:48.869590  417416 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:38:48.869625  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:38:49.213262  417416 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:38:49.213306  417416 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:38:49.589704  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:38:50.959125  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.521308886s)
	I1115 09:38:50.959214  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.517710152s)
	I1115 09:38:51.202787  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.75916856s)
	I1115 09:38:51.202917  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.647426371s)
	I1115 09:38:51.202964  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.641612719s)
	W1115 09:38:51.483241  417416 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1115 09:38:51.818597  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.254481472s)
	I1115 09:38:52.164448  417416 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:38:52.167429  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:52.167867  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:52.167901  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:52.168085  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:52.439273  417416 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:38:52.530216  417416 addons.go:239] Setting addon gcp-auth=true in "addons-965866"
	I1115 09:38:52.530283  417416 host.go:66] Checking if "addons-965866" exists ...
	I1115 09:38:52.534487  417416 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:38:52.537198  417416 main.go:143] libmachine: domain addons-965866 has defined MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:52.537704  417416 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:72:38", ip: ""} in network mk-addons-965866: {Iface:virbr1 ExpiryTime:2025-11-15 10:38:19 +0000 UTC Type:0 Mac:52:54:00:ba:72:38 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-965866 Clientid:01:52:54:00:ba:72:38}
	I1115 09:38:52.537735  417416 main.go:143] libmachine: domain addons-965866 has defined IP address 192.168.39.252 and MAC address 52:54:00:ba:72:38 in network mk-addons-965866
	I1115 09:38:52.537959  417416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/addons-965866/id_rsa Username:docker}
	I1115 09:38:53.273342  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.699046799s)
	I1115 09:38:53.273383  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.689246915s)
	I1115 09:38:53.273395  417416 addons.go:480] Verifying addon ingress=true in "addons-965866"
	I1115 09:38:53.273434  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.152032129s)
	I1115 09:38:53.273467  417416 addons.go:480] Verifying addon registry=true in "addons-965866"
	I1115 09:38:53.273567  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.950122736s)
	I1115 09:38:53.273589  417416 addons.go:480] Verifying addon metrics-server=true in "addons-965866"
	I1115 09:38:53.273640  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.425155572s)
	I1115 09:38:53.273701  417416 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.381030974s)
	I1115 09:38:53.273774  417416 api_server.go:72] duration metric: took 8.638149345s to wait for apiserver process to appear ...
	I1115 09:38:53.273791  417416 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:38:53.273818  417416 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I1115 09:38:53.275260  417416 out.go:179] * Verifying ingress addon...
	I1115 09:38:53.275260  417416 out.go:179] * Verifying registry addon...
	I1115 09:38:53.276091  417416 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-965866 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:38:53.277759  417416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:38:53.277997  417416 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:38:53.314527  417416 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I1115 09:38:53.317654  417416 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:38:53.317695  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:53.317712  417416 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:38:53.317721  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:53.318585  417416 api_server.go:141] control plane version: v1.34.1
	I1115 09:38:53.318619  417416 api_server.go:131] duration metric: took 44.819557ms to wait for apiserver health ...
	I1115 09:38:53.318633  417416 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:38:53.398821  417416 system_pods.go:59] 15 kube-system pods found
	I1115 09:38:53.398880  417416 system_pods.go:61] "amd-gpu-device-plugin-cnlhx" [36b3cde7-2c30-4f04-8df2-f40949aafb70] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:38:53.398892  417416 system_pods.go:61] "coredns-66bc5c9577-24wrk" [fd25c45c-117e-4e43-9b51-356a8440a9d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:38:53.398907  417416 system_pods.go:61] "coredns-66bc5c9577-4mrv8" [186b52ac-60b4-4fd9-bdda-33d5c8fd792a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:38:53.398914  417416 system_pods.go:61] "etcd-addons-965866" [71560607-cabe-42ea-84de-e4ef8b38ee8d] Running
	I1115 09:38:53.398921  417416 system_pods.go:61] "kube-apiserver-addons-965866" [24c78ffc-cfd6-4cd4-aa05-de5896e54ea3] Running
	I1115 09:38:53.398926  417416 system_pods.go:61] "kube-controller-manager-addons-965866" [005dc276-ebc4-4913-9dd1-1e8eb2679f8d] Running
	I1115 09:38:53.398934  417416 system_pods.go:61] "kube-ingress-dns-minikube" [783ea184-7bfb-4c8a-8adc-1c0eb45bfbec] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:38:53.398939  417416 system_pods.go:61] "kube-proxy-kft47" [3aaf88a0-bc54-4e40-84a2-8d07c947e115] Running
	I1115 09:38:53.398945  417416 system_pods.go:61] "kube-scheduler-addons-965866" [1bdcd14a-6ca6-45c0-ade3-b62c79e4e3ed] Running
	I1115 09:38:53.398952  417416 system_pods.go:61] "metrics-server-85b7d694d7-g7cwg" [8a8c0a7b-40cd-4f4f-9186-829a7d7c3c14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:38:53.398964  417416 system_pods.go:61] "nvidia-device-plugin-daemonset-xk524" [4e750b0a-f108-442f-b1dc-a91663709ffd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:38:53.398973  417416 system_pods.go:61] "registry-6b586f9694-gvxk5" [750caabe-24b3-415a-988c-05ee8b751f39] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:38:53.398985  417416 system_pods.go:61] "registry-creds-764b6fb674-sxqrq" [b4632efc-9e47-4d7b-acae-dd6868c427dd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:38:53.398994  417416 system_pods.go:61] "registry-proxy-85vq7" [2ce95f92-5041-42b4-94d4-70973bc1dea8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:38:53.399003  417416 system_pods.go:61] "storage-provisioner" [62f417ad-f47d-48cc-92d6-66db53b17151] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:38:53.399016  417416 system_pods.go:74] duration metric: took 80.374108ms to wait for pod list to return data ...
	I1115 09:38:53.399032  417416 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:38:53.477017  417416 default_sa.go:45] found service account: "default"
	I1115 09:38:53.477049  417416 default_sa.go:55] duration metric: took 78.00861ms for default service account to be created ...
	I1115 09:38:53.477064  417416 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:38:53.509641  417416 system_pods.go:86] 15 kube-system pods found
	I1115 09:38:53.509700  417416 system_pods.go:89] "amd-gpu-device-plugin-cnlhx" [36b3cde7-2c30-4f04-8df2-f40949aafb70] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:38:53.509711  417416 system_pods.go:89] "coredns-66bc5c9577-24wrk" [fd25c45c-117e-4e43-9b51-356a8440a9d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:38:53.509727  417416 system_pods.go:89] "coredns-66bc5c9577-4mrv8" [186b52ac-60b4-4fd9-bdda-33d5c8fd792a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:38:53.509735  417416 system_pods.go:89] "etcd-addons-965866" [71560607-cabe-42ea-84de-e4ef8b38ee8d] Running
	I1115 09:38:53.509742  417416 system_pods.go:89] "kube-apiserver-addons-965866" [24c78ffc-cfd6-4cd4-aa05-de5896e54ea3] Running
	I1115 09:38:53.509749  417416 system_pods.go:89] "kube-controller-manager-addons-965866" [005dc276-ebc4-4913-9dd1-1e8eb2679f8d] Running
	I1115 09:38:53.509760  417416 system_pods.go:89] "kube-ingress-dns-minikube" [783ea184-7bfb-4c8a-8adc-1c0eb45bfbec] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:38:53.509769  417416 system_pods.go:89] "kube-proxy-kft47" [3aaf88a0-bc54-4e40-84a2-8d07c947e115] Running
	I1115 09:38:53.509773  417416 system_pods.go:89] "kube-scheduler-addons-965866" [1bdcd14a-6ca6-45c0-ade3-b62c79e4e3ed] Running
	I1115 09:38:53.509781  417416 system_pods.go:89] "metrics-server-85b7d694d7-g7cwg" [8a8c0a7b-40cd-4f4f-9186-829a7d7c3c14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:38:53.509789  417416 system_pods.go:89] "nvidia-device-plugin-daemonset-xk524" [4e750b0a-f108-442f-b1dc-a91663709ffd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:38:53.509796  417416 system_pods.go:89] "registry-6b586f9694-gvxk5" [750caabe-24b3-415a-988c-05ee8b751f39] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:38:53.509803  417416 system_pods.go:89] "registry-creds-764b6fb674-sxqrq" [b4632efc-9e47-4d7b-acae-dd6868c427dd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:38:53.509816  417416 system_pods.go:89] "registry-proxy-85vq7" [2ce95f92-5041-42b4-94d4-70973bc1dea8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:38:53.509825  417416 system_pods.go:89] "storage-provisioner" [62f417ad-f47d-48cc-92d6-66db53b17151] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:38:53.509857  417416 system_pods.go:126] duration metric: took 32.784806ms to wait for k8s-apps to be running ...
	I1115 09:38:53.509873  417416 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:38:53.509929  417416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:38:53.783850  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.788490129s)
	W1115 09:38:53.783909  417416 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:38:53.783937  417416 retry.go:31] will retry after 142.722552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
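The failure above is a CRD-establishment race: the VolumeSnapshotClass object and the CRD that defines it are sent in the same kubectl apply, so the API server has not yet registered the new kind when the custom resource is validated, and the apply exits 1 with "ensure CRDs are installed first". minikube handles this by retrying and then reapplying with --force (visible a few lines below). As a minimal hand-run sketch of the same fix, assuming kubectl access to this cluster and the addon manifests named in the log (illustrative only, not the test's actual retry code):

	# wait for the CRD created by the first apply to be registered and Established
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# only then apply the custom resource that instantiates it
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
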
	I1115 09:38:53.809480  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:53.809789  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:53.926925  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:38:54.304896  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:54.305010  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:54.436242  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.846463937s)
	I1115 09:38:54.436292  417416 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-965866"
	I1115 09:38:54.436323  417416 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.901795595s)
	I1115 09:38:54.436388  417416 system_svc.go:56] duration metric: took 926.503165ms WaitForService to wait for kubelet
	I1115 09:38:54.436426  417416 kubeadm.go:587] duration metric: took 9.800807643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:38:54.436510  417416 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:38:54.438038  417416 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:38:54.438038  417416 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:38:54.439697  417416 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:38:54.440459  417416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:38:54.440975  417416 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:38:54.440998  417416 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:38:54.481470  417416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 09:38:54.481517  417416 node_conditions.go:123] node cpu capacity is 2
	I1115 09:38:54.481536  417416 node_conditions.go:105] duration metric: took 45.019291ms to run NodePressure ...
	I1115 09:38:54.481554  417416 start.go:242] waiting for startup goroutines ...
	I1115 09:38:54.482160  417416 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:38:54.482188  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
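The kapi.go:96 lines that follow are a polling loop: each addon's pods are looked up by label selector roughly every half second (per the timestamps) until they leave Pending, and kapi.go:107 later reports the total wait. A rough shell equivalent of one of these checks, assuming kubectl access to the cluster; the selector and namespace are taken from the log above, and the commands are only an illustration, not the test's own code:

	# list the csi-hostpath-driver pods the test is polling
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	# or block until they are Ready instead of polling by hand
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=5m
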
	I1115 09:38:54.599490  417416 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:38:54.599522  417416 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:38:54.738297  417416 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:38:54.738322  417416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:38:54.790783  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:54.791118  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:54.831745  417416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:38:54.946374  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:55.288020  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:55.288590  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:55.445074  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:55.782506  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:55.783909  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:55.946969  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:56.246727  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.319748252s)
	I1115 09:38:56.332336  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:56.339336  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:56.443615  417416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.611823107s)
	I1115 09:38:56.444762  417416 addons.go:480] Verifying addon gcp-auth=true in "addons-965866"
	I1115 09:38:56.446293  417416 out.go:179] * Verifying gcp-auth addon...
	I1115 09:38:56.448053  417416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:38:56.485134  417416 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:38:56.485163  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:56.485390  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:56.786190  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:56.787785  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:56.948461  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:56.959568  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:57.285369  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:57.286120  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:57.448844  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:57.453061  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:57.792168  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:57.793234  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:57.950989  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:57.954023  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:58.286126  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:58.286356  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:58.445295  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:58.451705  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:58.785060  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:58.788088  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:58.944365  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:58.951276  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:59.286368  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:59.286477  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:59.444128  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:59.452189  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:38:59.783583  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:38:59.785206  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:38:59.946737  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:38:59.953095  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:00.287922  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:00.291159  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:00.445691  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:00.451750  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:00.790276  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:00.790467  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:00.950995  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:01.051390  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:01.282889  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:01.283119  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:01.444899  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:01.453299  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:01.782288  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:01.782424  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:01.943873  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:01.951510  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:02.282619  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:02.282712  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:02.444389  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:02.459530  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:02.784054  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:02.784924  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:02.944530  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:02.951903  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:03.283749  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:03.283766  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:03.444441  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:03.456614  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:03.782074  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:03.783514  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:03.945171  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:03.954521  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:04.284099  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:04.284200  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:04.444386  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:04.454250  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:04.782876  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:04.783889  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:04.947245  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:04.953077  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:05.281631  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:05.281788  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:05.445018  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:05.452619  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:05.781588  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:05.782196  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:05.944961  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:05.951209  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:06.282299  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:06.282875  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:06.444534  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:06.450939  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:06.782773  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:06.782952  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:06.945211  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:06.951805  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:07.282192  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:07.282409  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:07.444333  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:07.450782  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:07.781480  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:07.781584  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:07.944752  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:07.951165  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:08.282529  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:08.282771  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:08.448400  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:08.452155  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:08.784774  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:08.784857  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:08.944649  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:08.952370  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:09.285879  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:09.286761  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:09.444303  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:09.453150  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:09.788074  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:09.789783  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:09.945130  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:09.951610  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:10.282963  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:10.283098  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:10.445114  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:10.451955  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:10.781608  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:10.783295  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:10.946009  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:10.951786  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:11.281741  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:11.282084  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:11.448303  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:11.451537  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:11.783573  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:11.783713  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:11.947858  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:11.953308  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:12.283792  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:12.285302  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:12.449700  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:12.455084  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:12.787476  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:12.787937  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:13.181623  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:13.190472  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:13.285972  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:13.286685  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:13.445277  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:13.452040  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:13.781281  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:13.781922  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:13.945342  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:13.952120  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:14.283641  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:14.283945  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:14.445230  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:14.451122  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:14.784489  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:14.784762  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:14.944943  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:14.951422  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:15.401006  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:15.403504  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:15.445517  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:15.451992  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:15.782991  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:15.783867  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:15.951063  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:15.955782  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:16.281846  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:16.283602  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:16.445165  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:16.452098  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:16.785382  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:16.788108  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:16.945136  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:16.952693  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:17.283105  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:17.285156  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:17.617165  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:17.619025  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:17.782581  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:17.783052  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:17.948484  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:17.952792  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:18.283860  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:18.284179  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:18.445188  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:18.456199  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:18.784293  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:18.784763  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:18.945786  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:18.951090  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:19.282309  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:19.282632  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:19.453307  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:19.456763  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:19.786162  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:19.788560  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:19.944015  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:19.951793  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:20.281730  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:20.281767  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:20.446324  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:20.453321  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:20.783325  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:20.783922  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:20.947526  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:20.952147  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:21.284497  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:21.289073  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:21.444096  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:21.456889  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:21.781843  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:21.781980  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:21.944585  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:21.953396  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:22.282548  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:22.282848  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:22.444818  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:22.452021  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:22.782618  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:22.782710  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:22.944871  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:22.952472  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:23.282440  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:39:23.283832  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:23.444142  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:23.451808  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:23.782018  417416 kapi.go:107] duration metric: took 30.504251334s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:39:23.783560  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:23.944193  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:23.951596  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:24.284607  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:24.444041  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:24.451244  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:24.783446  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:24.944706  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:24.951212  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:25.283717  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:25.444274  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:25.452028  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:25.784093  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:25.945107  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:25.952430  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:26.282959  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:26.444573  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:26.450546  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:26.783986  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:26.945317  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:26.952085  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:27.281573  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:27.446326  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:27.451064  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:27.782407  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:27.944985  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:27.952497  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:28.282452  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:28.453970  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:28.460346  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:28.785055  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:28.946297  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:28.952415  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:29.286330  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:29.448129  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:29.454744  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:29.792558  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:29.950140  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:29.953158  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:30.283074  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:30.447202  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:30.456342  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:30.783281  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:30.945071  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:30.951862  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:31.281360  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:31.452553  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:31.454532  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:31.789593  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:32.007281  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:32.007908  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:32.282201  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:32.448291  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:32.457164  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:32.783961  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:32.946282  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:32.952386  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:33.282220  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:33.453087  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:33.454903  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:33.783120  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:33.944930  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:33.951226  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:34.282496  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:34.445867  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:34.455030  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:34.862357  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:34.944878  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:34.951723  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:35.284284  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:35.444495  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:35.452368  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:35.785128  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:35.946115  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:35.952354  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:36.286936  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:36.445459  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:36.452439  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:36.783938  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:36.947812  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:36.954737  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:37.285551  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:37.444740  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:37.452599  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:37.784432  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:37.954938  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:37.956330  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:38.283379  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:38.445522  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:38.452847  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:38.782319  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:38.944827  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:38.951558  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:39.283957  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:39.445440  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:39.458289  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:39.782941  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:39.946352  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:39.951766  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:40.284304  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:40.444789  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:40.454767  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:40.786969  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:40.946363  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:40.951350  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:41.282742  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:41.444081  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:41.453626  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:41.782855  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:41.946773  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:41.953075  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:42.283269  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:42.445240  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:42.455610  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:42.782195  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:42.944566  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:42.951325  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:43.282085  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:43.447769  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:43.455954  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:43.782124  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:43.944998  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:43.953398  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:44.289194  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:44.447282  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:44.453140  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:44.784506  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:44.948037  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:44.953177  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:45.282417  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:45.446045  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:45.454759  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:45.786806  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:45.944455  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:45.953306  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:46.283806  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:46.512255  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:46.513230  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:46.787219  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:46.946327  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:46.950674  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:47.282955  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:47.448010  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:47.455294  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:47.782135  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:47.949354  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:47.960417  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:48.283914  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:48.444335  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:48.452087  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:48.789063  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:48.947191  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:48.951413  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:49.285766  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:49.444153  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:49.454136  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:49.783188  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:49.945455  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:49.950616  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:50.281827  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:50.451461  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:50.719604  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:50.785418  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:50.946207  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:50.952206  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:51.282299  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:51.444980  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:51.453758  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:51.783621  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:51.945852  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:51.951265  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:52.283387  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:52.448007  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:52.457308  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:52.783402  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:52.945312  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:52.951442  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:53.285045  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:53.444197  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:53.458250  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:53.785561  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:53.944292  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:39:53.953576  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:54.282383  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:54.445472  417416 kapi.go:107] duration metric: took 1m0.005007145s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:39:54.453418  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:54.783351  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:54.951294  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:55.283033  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:55.453983  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:55.782214  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:55.951993  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:56.281616  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:56.452220  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:56.782263  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:56.952019  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:57.282096  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:57.453337  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:57.783040  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:57.953303  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:58.284807  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:58.458438  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:58.790889  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:58.954009  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:59.284969  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:59.458471  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:39:59.787350  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:39:59.951484  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:00.286014  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:00.454363  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:00.784332  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:00.954736  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:01.282356  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:01.455278  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:01.784362  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:01.952607  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:02.284655  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:02.455822  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:02.781286  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:02.956380  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:03.282288  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:03.452328  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:04.082122  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:04.093397  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:04.283535  417416 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:40:04.464766  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:04.782493  417416 kapi.go:107] duration metric: took 1m11.50449197s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:40:04.952269  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:05.454612  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:06.006909  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:06.453434  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:06.956696  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:07.463141  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:07.953091  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:08.455942  417416 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:40:08.952792  417416 kapi.go:107] duration metric: took 1m12.504732891s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:40:08.955108  417416 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-965866 cluster.
	I1115 09:40:08.956897  417416 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:40:08.958431  417416 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 09:40:08.960245  417416 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1115 09:40:08.961542  417416 addons.go:515] duration metric: took 1m24.325880456s for enable addons: enabled=[registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget cloud-spanner metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1115 09:40:08.961592  417416 start.go:247] waiting for cluster config update ...
	I1115 09:40:08.961620  417416 start.go:256] writing updated cluster config ...
	I1115 09:40:08.961945  417416 ssh_runner.go:195] Run: rm -f paused
	I1115 09:40:08.968374  417416 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:40:08.972678  417416 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24wrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:08.978190  417416 pod_ready.go:94] pod "coredns-66bc5c9577-24wrk" is "Ready"
	I1115 09:40:08.978216  417416 pod_ready.go:86] duration metric: took 5.514004ms for pod "coredns-66bc5c9577-24wrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:08.981211  417416 pod_ready.go:83] waiting for pod "etcd-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:08.986103  417416 pod_ready.go:94] pod "etcd-addons-965866" is "Ready"
	I1115 09:40:08.986128  417416 pod_ready.go:86] duration metric: took 4.896757ms for pod "etcd-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:08.988572  417416 pod_ready.go:83] waiting for pod "kube-apiserver-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:08.994440  417416 pod_ready.go:94] pod "kube-apiserver-addons-965866" is "Ready"
	I1115 09:40:08.994463  417416 pod_ready.go:86] duration metric: took 5.867728ms for pod "kube-apiserver-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:08.997072  417416 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:09.373615  417416 pod_ready.go:94] pod "kube-controller-manager-addons-965866" is "Ready"
	I1115 09:40:09.373686  417416 pod_ready.go:86] duration metric: took 376.594163ms for pod "kube-controller-manager-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:09.574310  417416 pod_ready.go:83] waiting for pod "kube-proxy-kft47" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:09.972551  417416 pod_ready.go:94] pod "kube-proxy-kft47" is "Ready"
	I1115 09:40:09.972601  417416 pod_ready.go:86] duration metric: took 398.242993ms for pod "kube-proxy-kft47" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:10.173786  417416 pod_ready.go:83] waiting for pod "kube-scheduler-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:10.573393  417416 pod_ready.go:94] pod "kube-scheduler-addons-965866" is "Ready"
	I1115 09:40:10.573436  417416 pod_ready.go:86] duration metric: took 399.610643ms for pod "kube-scheduler-addons-965866" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:40:10.573455  417416 pod_ready.go:40] duration metric: took 1.605037628s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:40:10.624535  417416 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:40:10.626499  417416 out.go:179] * Done! kubectl is now configured to use "addons-965866" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.938044847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e0873d4-c05f-4cdf-8199-625ae5f94911 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.938120135Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e0873d4-c05f-4cdf-8199-625ae5f94911 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.939699031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4229146a-6e1f-43fc-adcd-7a57dd1df539 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.941150866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763199805941125603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4229146a-6e1f-43fc-adcd-7a57dd1df539 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.941875508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a74201c4-6a86-482e-93f3-b4e535e79137 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.941948602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a74201c4-6a86-482e-93f3-b4e535e79137 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.942547015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5927c374c4207196c2f87743b1b955a4b112cdcb086e1e034bee9bcb87ecaa3,PodSandboxId:d4b62459a033e03a0b2de9c327eca555414af6154e589a660de6465ca56f260f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763199664547690048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55acc31d-0a00-4815-9a94-f5347b56d0a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b9b159f68e209cb5d2730358f380af461510d79a821552df42bbe4b24d2adb,PodSandboxId:e9df416c7aff8d8e07d0a3ac5cfdfcc5c8fb62e708bf861731657dc692c9e2e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763199614991302905,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70a0ad0e-2065-49de-b086-eb86bff49a67,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4ad706986c3ecb976d14f60faad88913c52da94a8492968a7e9cfd9db0d401,PodSandboxId:bb502da9d2d1c80c845ef368137a19953b1336716767a04a4ffed86e12da3a21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763199604362384802,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-5bv6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2fc45e3-9f27-4af5-a57a-a8b8953e5472,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0d2fed49c049d69333f8f168280baa6c6720a2f568c6dc502a692756d0ff4735,PodSandboxId:bbe25b4d20ddb117240e93e70a77528862a33542c04b725de20208ea9480f0ca,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763199590736374277,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r96qz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4a782a5-344b-4aef-b4ba-bc6793d3993a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7877aee7ba93e1dd18d4aa7ba2f2f0ab72571993f70408f844cad93f2a2950,PodSandboxId:c99d949305e60e5c8758df4987a5197221f120ed3b8d1ac8e8d5703c788f1e85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763199575108646035,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xbjfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f47453f-53fa-40cf-a99f-292f867c4782,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0fa0866e733f75262f639cd9157b6fc8f6d3454e7e9c236c99a7fe3d1507408,PodSandboxId:7cf8a904fe28b431eef8bbfe0a396f9a4e0c13b8b8e07d94772ef8cf60d2514a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763199558803082933,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783ea184-7bfb-4c8a-8adc-1c0eb45bfbec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2cfeca18bee21e77d247bd8e702efa0d4e8bb4c6c93a359515d3678a42c1a4c,PodSandboxId:f9ba2596ad5a96455860ce38d743743070a282abb984f15aba242f1d031d49d9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763199533825056836,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cnlhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b3cde7-2c30-4f04-8df2-f40949aafb70,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee47b987d8e9b6511ea1b1d3f9d4cfa75832cba00924ddb4641faee9c4d2c2d,PodSandboxId:d1a4dcf2bad481e379e565219c948744cc87eaf44707f24180131023114c7e05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763199533458587805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f417ad-f47d-48cc-92d6-66db53b17151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a773a425284817ff4dd8291974f8d399eb43a71a493557c10b9576746be65c22,PodSandboxId:38a50791c13447963e97e0699aed101ac982109ff627b4b9220010384441cbc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1
fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763199527853015496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kft47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaf88a0-bc54-4e40-84a2-8d07c947e115,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da27193c9b48463feef2f63f1c6c4ab4f7a86d862dd5ec2771b84aa4378d48,PodSandboxId:718c039d8975bfba8c5bb2a0a25fb90c2daaf561de9dec0542f996e3d9ae7162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763199526961040976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-24wrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd25c45c-117e-4e43-9b51-356a8440a9d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58525f1b47d7521580ef4f8316ca8cda46ea084438178dd52831600c74e7ede,PodSandboxId:c6cba57a98418ba226bf5b9f58e023ac67be67756bb2a9612e036ac86bed63d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763199514405276094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2da6e84ec54def5de4867aac1e4d7272,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fdc67c7138c080f5634727fa8d4c5f48b3470bac3e76ec088ae54c11bd8fa0,PodSandboxId:0bf3b920153a3835ffd2aaafd2255578acc5df05c53ed6639170e1f9b02cd710,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763199514396884447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7513ca45fbdb5efe2e87eaf4733e3f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"co
ntainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906fb2769f9d34c9732a54ee33f422a5da4433428652e49cea3633ce339e889c,PodSandboxId:114d7b32a9cf746eec2c5ba81a1ab28d857a5da98b8b513a371eb33c4ce3e7cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763199514412968750,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a9e1e386f3ace000ff29104ff8763,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d350bfdbcdd1ea5830d2b5f6018ecae263c7dee769d06442f4fdc05e4feed2,PodSandboxId:07cb48831b5e87a3372960b51ad685a6e2a35b5717d2a670bb901b8e9bd86acb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763199514351776687,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965866,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 84ccf77541b0d39ec97b8568d896ca27,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a74201c4-6a86-482e-93f3-b4e535e79137 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.985935071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3271952-4b0e-4c15-8338-05f03bd83e55 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.986012998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3271952-4b0e-4c15-8338-05f03bd83e55 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.987317739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=318ba2c9-73b5-4973-87af-509100540674 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.988645483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763199805988618550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=318ba2c9-73b5-4973-87af-509100540674 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.989297044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85774187-8f77-4221-bf0b-d40323c0ae91 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.989357944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85774187-8f77-4221-bf0b-d40323c0ae91 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:25 addons-965866 crio[813]: time="2025-11-15 09:43:25.990005824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5927c374c4207196c2f87743b1b955a4b112cdcb086e1e034bee9bcb87ecaa3,PodSandboxId:d4b62459a033e03a0b2de9c327eca555414af6154e589a660de6465ca56f260f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763199664547690048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55acc31d-0a00-4815-9a94-f5347b56d0a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b9b159f68e209cb5d2730358f380af461510d79a821552df42bbe4b24d2adb,PodSandboxId:e9df416c7aff8d8e07d0a3ac5cfdfcc5c8fb62e708bf861731657dc692c9e2e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763199614991302905,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70a0ad0e-2065-49de-b086-eb86bff49a67,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4ad706986c3ecb976d14f60faad88913c52da94a8492968a7e9cfd9db0d401,PodSandboxId:bb502da9d2d1c80c845ef368137a19953b1336716767a04a4ffed86e12da3a21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763199604362384802,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-5bv6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2fc45e3-9f27-4af5-a57a-a8b8953e5472,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0d2fed49c049d69333f8f168280baa6c6720a2f568c6dc502a692756d0ff4735,PodSandboxId:bbe25b4d20ddb117240e93e70a77528862a33542c04b725de20208ea9480f0ca,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763199590736374277,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r96qz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4a782a5-344b-4aef-b4ba-bc6793d3993a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7877aee7ba93e1dd18d4aa7ba2f2f0ab72571993f70408f844cad93f2a2950,PodSandboxId:c99d949305e60e5c8758df4987a5197221f120ed3b8d1ac8e8d5703c788f1e85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763199575108646035,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xbjfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f47453f-53fa-40cf-a99f-292f867c4782,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0fa0866e733f75262f639cd9157b6fc8f6d3454e7e9c236c99a7fe3d1507408,PodSandboxId:7cf8a904fe28b431eef8bbfe0a396f9a4e0c13b8b8e07d94772ef8cf60d2514a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763199558803082933,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783ea184-7bfb-4c8a-8adc-1c0eb45bfbec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2cfeca18bee21e77d247bd8e702efa0d4e8bb4c6c93a359515d3678a42c1a4c,PodSandboxId:f9ba2596ad5a96455860ce38d743743070a282abb984f15aba242f1d031d49d9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763199533825056836,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cnlhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b3cde7-2c30-4f04-8df2-f40949aafb70,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee47b987d8e9b6511ea1b1d3f9d4cfa75832cba00924ddb4641faee9c4d2c2d,PodSandboxId:d1a4dcf2bad481e379e565219c948744cc87eaf44707f24180131023114c7e05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763199533458587805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f417ad-f47d-48cc-92d6-66db53b17151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a773a425284817ff4dd8291974f8d399eb43a71a493557c10b9576746be65c22,PodSandboxId:38a50791c13447963e97e0699aed101ac982109ff627b4b9220010384441cbc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1
fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763199527853015496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kft47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaf88a0-bc54-4e40-84a2-8d07c947e115,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da27193c9b48463feef2f63f1c6c4ab4f7a86d862dd5ec2771b84aa4378d48,PodSandboxId:718c039d8975bfba8c5bb2a0a25fb90c2daaf561de9dec0542f996e3d9ae7162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763199526961040976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-24wrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd25c45c-117e-4e43-9b51-356a8440a9d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58525f1b47d7521580ef4f8316ca8cda46ea084438178dd52831600c74e7ede,PodSandboxId:c6cba57a98418ba226bf5b9f58e023ac67be67756bb2a9612e036ac86bed63d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763199514405276094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2da6e84ec54def5de4867aac1e4d7272,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fdc67c7138c080f5634727fa8d4c5f48b3470bac3e76ec088ae54c11bd8fa0,PodSandboxId:0bf3b920153a3835ffd2aaafd2255578acc5df05c53ed6639170e1f9b02cd710,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763199514396884447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7513ca45fbdb5efe2e87eaf4733e3f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"co
ntainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906fb2769f9d34c9732a54ee33f422a5da4433428652e49cea3633ce339e889c,PodSandboxId:114d7b32a9cf746eec2c5ba81a1ab28d857a5da98b8b513a371eb33c4ce3e7cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763199514412968750,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a9e1e386f3ace000ff29104ff8763,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d350bfdbcdd1ea5830d2b5f6018ecae263c7dee769d06442f4fdc05e4feed2,PodSandboxId:07cb48831b5e87a3372960b51ad685a6e2a35b5717d2a670bb901b8e9bd86acb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763199514351776687,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965866,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 84ccf77541b0d39ec97b8568d896ca27,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85774187-8f77-4221-bf0b-d40323c0ae91 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.001641703Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.001846868Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.011970554Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=b3f7a05d-c72e-4632-ad35-bee555dc0d17 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.012077822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3f7a05d-c72e-4632-ad35-bee555dc0d17 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.032441940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94c4a547-05c3-445e-b407-0ad48e8ef0b3 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.032635114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94c4a547-05c3-445e-b407-0ad48e8ef0b3 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.036007493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14de2a1e-1f00-406f-8f14-3aee3b7a0834 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.037231320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763199806037203615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14de2a1e-1f00-406f-8f14-3aee3b7a0834 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.038197224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f32478ef-3982-4fa4-972f-038a4f39181e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.038276066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f32478ef-3982-4fa4-972f-038a4f39181e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:43:26 addons-965866 crio[813]: time="2025-11-15 09:43:26.038601722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5927c374c4207196c2f87743b1b955a4b112cdcb086e1e034bee9bcb87ecaa3,PodSandboxId:d4b62459a033e03a0b2de9c327eca555414af6154e589a660de6465ca56f260f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763199664547690048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55acc31d-0a00-4815-9a94-f5347b56d0a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b9b159f68e209cb5d2730358f380af461510d79a821552df42bbe4b24d2adb,PodSandboxId:e9df416c7aff8d8e07d0a3ac5cfdfcc5c8fb62e708bf861731657dc692c9e2e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763199614991302905,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70a0ad0e-2065-49de-b086-eb86bff49a67,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4ad706986c3ecb976d14f60faad88913c52da94a8492968a7e9cfd9db0d401,PodSandboxId:bb502da9d2d1c80c845ef368137a19953b1336716767a04a4ffed86e12da3a21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763199604362384802,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-5bv6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2fc45e3-9f27-4af5-a57a-a8b8953e5472,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0d2fed49c049d69333f8f168280baa6c6720a2f568c6dc502a692756d0ff4735,PodSandboxId:bbe25b4d20ddb117240e93e70a77528862a33542c04b725de20208ea9480f0ca,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763199590736374277,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r96qz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4a782a5-344b-4aef-b4ba-bc6793d3993a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7877aee7ba93e1dd18d4aa7ba2f2f0ab72571993f70408f844cad93f2a2950,PodSandboxId:c99d949305e60e5c8758df4987a5197221f120ed3b8d1ac8e8d5703c788f1e85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763199575108646035,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xbjfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f47453f-53fa-40cf-a99f-292f867c4782,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0fa0866e733f75262f639cd9157b6fc8f6d3454e7e9c236c99a7fe3d1507408,PodSandboxId:7cf8a904fe28b431eef8bbfe0a396f9a4e0c13b8b8e07d94772ef8cf60d2514a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763199558803082933,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783ea184-7bfb-4c8a-8adc-1c0eb45bfbec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2cfeca18bee21e77d247bd8e702efa0d4e8bb4c6c93a359515d3678a42c1a4c,PodSandboxId:f9ba2596ad5a96455860ce38d743743070a282abb984f15aba242f1d031d49d9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763199533825056836,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cnlhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b3cde7-2c30-4f04-8df2-f40949aafb70,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee47b987d8e9b6511ea1b1d3f9d4cfa75832cba00924ddb4641faee9c4d2c2d,PodSandboxId:d1a4dcf2bad481e379e565219c948744cc87eaf44707f24180131023114c7e05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763199533458587805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f417ad-f47d-48cc-92d6-66db53b17151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a773a425284817ff4dd8291974f8d399eb43a71a493557c10b9576746be65c22,PodSandboxId:38a50791c13447963e97e0699aed101ac982109ff627b4b9220010384441cbc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1
fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763199527853015496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kft47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaf88a0-bc54-4e40-84a2-8d07c947e115,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da27193c9b48463feef2f63f1c6c4ab4f7a86d862dd5ec2771b84aa4378d48,PodSandboxId:718c039d8975bfba8c5bb2a0a25fb90c2daaf561de9dec0542f996e3d9ae7162,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763199526961040976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-24wrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd25c45c-117e-4e43-9b51-356a8440a9d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58525f1b47d7521580ef4f8316ca8cda46ea084438178dd52831600c74e7ede,PodSandboxId:c6cba57a98418ba226bf5b9f58e023ac67be67756bb2a9612e036ac86bed63d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763199514405276094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2da6e84ec54def5de4867aac1e4d7272,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fdc67c7138c080f5634727fa8d4c5f48b3470bac3e76ec088ae54c11bd8fa0,PodSandboxId:0bf3b920153a3835ffd2aaafd2255578acc5df05c53ed6639170e1f9b02cd710,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763199514396884447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7513ca45fbdb5efe2e87eaf4733e3f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"co
ntainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906fb2769f9d34c9732a54ee33f422a5da4433428652e49cea3633ce339e889c,PodSandboxId:114d7b32a9cf746eec2c5ba81a1ab28d857a5da98b8b513a371eb33c4ce3e7cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763199514412968750,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a9e1e386f3ace000ff29104ff8763,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d350bfdbcdd1ea5830d2b5f6018ecae263c7dee769d06442f4fdc05e4feed2,PodSandboxId:07cb48831b5e87a3372960b51ad685a6e2a35b5717d2a670bb901b8e9bd86acb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763199514351776687,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965866,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 84ccf77541b0d39ec97b8568d896ca27,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f32478ef-3982-4fa4-972f-038a4f39181e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5927c374c420       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   d4b62459a033e       nginx
	91b9b159f68e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   e9df416c7aff8       busybox
	4d4ad706986c3       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   bb502da9d2d1c       ingress-nginx-controller-6c8bf45fb-5bv6h
	0d2fed49c049d       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago       Exited              patch                     2                   bbe25b4d20ddb       ingress-nginx-admission-patch-r96qz
	1d7877aee7ba9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   c99d949305e60       ingress-nginx-admission-create-xbjfh
	e0fa0866e733f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   7cf8a904fe28b       kube-ingress-dns-minikube
	a2cfeca18bee2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f9ba2596ad5a9       amd-gpu-device-plugin-cnlhx
	3ee47b987d8e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d1a4dcf2bad48       storage-provisioner
	a773a42528481       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   38a50791c1344       kube-proxy-kft47
	16da27193c9b4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   718c039d8975b       coredns-66bc5c9577-24wrk
	906fb2769f9d3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   114d7b32a9cf7       etcd-addons-965866
	b58525f1b47d7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   c6cba57a98418       kube-scheduler-addons-965866
	16fdc67c7138c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   0bf3b920153a3       kube-apiserver-addons-965866
	f9d350bfdbcdd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   07cb48831b5e8       kube-controller-manager-addons-965866
	
	
	==> coredns [16da27193c9b48463feef2f63f1c6c4ab4f7a86d862dd5ec2771b84aa4378d48] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:37694 - 51840 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00117014s
	[INFO] 10.244.0.23:54537 - 49270 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017451s
	[INFO] 10.244.0.23:49388 - 19421 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100618s
	[INFO] 10.244.0.23:46700 - 23081 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000355606s
	[INFO] 10.244.0.23:41898 - 43350 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091127s
	[INFO] 10.244.0.23:56622 - 32407 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127074s
	[INFO] 10.244.0.23:46079 - 50957 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001267626s
	[INFO] 10.244.0.23:60394 - 1797 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00349666s
	[INFO] 10.244.0.28:38266 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000315483s
	[INFO] 10.244.0.28:44038 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188209s
	
	
	==> describe nodes <==
	Name:               addons-965866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-965866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=addons-965866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_38_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-965866
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:38:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-965866
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:43:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:41:12 +0000   Sat, 15 Nov 2025 09:38:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:41:12 +0000   Sat, 15 Nov 2025 09:38:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:41:12 +0000   Sat, 15 Nov 2025 09:38:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:41:12 +0000   Sat, 15 Nov 2025 09:38:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    addons-965866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 267ed3b390304a58989fff9e874a9a56
	  System UUID:                267ed3b3-9030-4a58-989f-ff9e874a9a56
	  Boot ID:                    7f683a10-c74a-4799-a368-93a1a0a034ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-world-app-5d498dc89-sq4cf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-5bv6h    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m33s
	  kube-system                 amd-gpu-device-plugin-cnlhx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-66bc5c9577-24wrk                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m41s
	  kube-system                 etcd-addons-965866                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m47s
	  kube-system                 kube-apiserver-addons-965866                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-965866       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-kft47                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-965866                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m37s  kube-proxy       
	  Normal  Starting                 4m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s  kubelet          Node addons-965866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s  kubelet          Node addons-965866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s  kubelet          Node addons-965866 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m46s  kubelet          Node addons-965866 status is now: NodeReady
	  Normal  RegisteredNode           4m43s  node-controller  Node addons-965866 event: Registered Node addons-965866 in Controller
	
	
	==> dmesg <==
	[  +1.301828] kauditd_printk_skb: 374 callbacks suppressed
	[Nov15 09:39] kauditd_printk_skb: 281 callbacks suppressed
	[ +12.242575] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.451021] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.862866] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.018434] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.494254] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.733119] kauditd_printk_skb: 200 callbacks suppressed
	[  +0.002071] kauditd_printk_skb: 109 callbacks suppressed
	[Nov15 09:40] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.339928] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.450227] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.582759] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.904769] kauditd_printk_skb: 65 callbacks suppressed
	[  +1.071694] kauditd_printk_skb: 152 callbacks suppressed
	[  +1.852093] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.021758] kauditd_printk_skb: 96 callbacks suppressed
	[  +2.443246] kauditd_printk_skb: 62 callbacks suppressed
	[Nov15 09:41] kauditd_printk_skb: 102 callbacks suppressed
	[  +3.961974] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.818438] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 42 callbacks suppressed
	[  +7.539840] kauditd_printk_skb: 41 callbacks suppressed
	[Nov15 09:43] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [906fb2769f9d34c9732a54ee33f422a5da4433428652e49cea3633ce339e889c] <==
	{"level":"info","ts":"2025-11-15T09:39:29.722960Z","caller":"traceutil/trace.go:172","msg":"trace[1214316924] transaction","detail":"{read_only:false; response_revision:969; number_of_response:1; }","duration":"116.194866ms","start":"2025-11-15T09:39:29.606752Z","end":"2025-11-15T09:39:29.722947Z","steps":["trace[1214316924] 'process raft request'  (duration: 116.095953ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:39:31.997439Z","caller":"traceutil/trace.go:172","msg":"trace[299218330] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"214.652933ms","start":"2025-11-15T09:39:31.782773Z","end":"2025-11-15T09:39:31.997426Z","steps":["trace[299218330] 'process raft request'  (duration: 214.397111ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:39:34.853179Z","caller":"traceutil/trace.go:172","msg":"trace[901257441] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"177.607222ms","start":"2025-11-15T09:39:34.675556Z","end":"2025-11-15T09:39:34.853163Z","steps":["trace[901257441] 'process raft request'  (duration: 177.152767ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:39:44.283229Z","caller":"traceutil/trace.go:172","msg":"trace[679943924] transaction","detail":"{read_only:false; response_revision:1029; number_of_response:1; }","duration":"185.925359ms","start":"2025-11-15T09:39:44.097289Z","end":"2025-11-15T09:39:44.283214Z","steps":["trace[679943924] 'process raft request'  (duration: 185.829799ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:39:46.504545Z","caller":"traceutil/trace.go:172","msg":"trace[1578864879] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"209.5972ms","start":"2025-11-15T09:39:46.294934Z","end":"2025-11-15T09:39:46.504531Z","steps":["trace[1578864879] 'process raft request'  (duration: 209.461659ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:39:50.711914Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.686051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-15T09:39:50.711990Z","caller":"traceutil/trace.go:172","msg":"trace[524501749] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1088; }","duration":"189.801059ms","start":"2025-11-15T09:39:50.522175Z","end":"2025-11-15T09:39:50.711976Z","steps":["trace[524501749] 'range keys from in-memory index tree'  (duration: 189.501143ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:39:52.187655Z","caller":"traceutil/trace.go:172","msg":"trace[636172398] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"101.034104ms","start":"2025-11-15T09:39:52.086607Z","end":"2025-11-15T09:39:52.187641Z","steps":["trace[636172398] 'process raft request'  (duration: 100.955253ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:40:04.076263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.94684ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17545912577357803451 > lease_revoke:<id:737f9a86e1786ea1>","response":"size:27"}
	{"level":"info","ts":"2025-11-15T09:40:04.076916Z","caller":"traceutil/trace.go:172","msg":"trace[1342908586] linearizableReadLoop","detail":"{readStateIndex:1180; appliedIndex:1179; }","duration":"292.337344ms","start":"2025-11-15T09:40:03.784565Z","end":"2025-11-15T09:40:04.076902Z","steps":["trace[1342908586] 'read index received'  (duration: 42.64925ms)","trace[1342908586] 'applied index is now lower than readState.Index'  (duration: 249.68704ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T09:40:04.077025Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"292.456912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:40:04.077045Z","caller":"traceutil/trace.go:172","msg":"trace[2038147501] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"292.485246ms","start":"2025-11-15T09:40:03.784554Z","end":"2025-11-15T09:40:04.077039Z","steps":["trace[2038147501] 'agreement among raft nodes before linearized reading'  (duration: 292.435933ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:40:04.079232Z","caller":"traceutil/trace.go:172","msg":"trace[183929958] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"336.425432ms","start":"2025-11-15T09:40:03.742797Z","end":"2025-11-15T09:40:04.079223Z","steps":["trace[183929958] 'process raft request'  (duration: 333.737026ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:40:04.079319Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T09:40:03.742748Z","time spent":"336.529435ms","remote":"127.0.0.1:55560","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":834,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-78565c9fb4-wqhzg.187824b2a0d83146\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-78565c9fb4-wqhzg.187824b2a0d83146\" value_size:748 lease:8322540540503027185 >> failure:<>"}
	{"level":"warn","ts":"2025-11-15T09:40:04.086050Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.177751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:40:04.086316Z","caller":"traceutil/trace.go:172","msg":"trace[1148618178] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1148; }","duration":"190.453852ms","start":"2025-11-15T09:40:03.895853Z","end":"2025-11-15T09:40:04.086307Z","steps":["trace[1148618178] 'agreement among raft nodes before linearized reading'  (duration: 183.560792ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:40:04.086239Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"242.000027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:40:04.086734Z","caller":"traceutil/trace.go:172","msg":"trace[1325996420] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:1148; }","duration":"242.469583ms","start":"2025-11-15T09:40:03.844226Z","end":"2025-11-15T09:40:04.086696Z","steps":["trace[1325996420] 'agreement among raft nodes before linearized reading'  (duration: 234.983934ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:40:04.086271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.161826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:40:04.087084Z","caller":"traceutil/trace.go:172","msg":"trace[1608035658] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1148; }","duration":"139.967988ms","start":"2025-11-15T09:40:03.947107Z","end":"2025-11-15T09:40:04.087075Z","steps":["trace[1608035658] 'agreement among raft nodes before linearized reading'  (duration: 132.299538ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:40:13.573747Z","caller":"traceutil/trace.go:172","msg":"trace[539012641] linearizableReadLoop","detail":"{readStateIndex:1236; appliedIndex:1236; }","duration":"124.321376ms","start":"2025-11-15T09:40:13.449381Z","end":"2025-11-15T09:40:13.573703Z","steps":["trace[539012641] 'read index received'  (duration: 124.315416ms)","trace[539012641] 'applied index is now lower than readState.Index'  (duration: 5.129µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T09:40:13.573922Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.521675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:40:13.573942Z","caller":"traceutil/trace.go:172","msg":"trace[7207857] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1202; }","duration":"124.558331ms","start":"2025-11-15T09:40:13.449378Z","end":"2025-11-15T09:40:13.573936Z","steps":["trace[7207857] 'agreement among raft nodes before linearized reading'  (duration: 124.490009ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:40:13.574001Z","caller":"traceutil/trace.go:172","msg":"trace[1223409363] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"163.059159ms","start":"2025-11-15T09:40:13.410929Z","end":"2025-11-15T09:40:13.573989Z","steps":["trace[1223409363] 'process raft request'  (duration: 162.940937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:40:44.685907Z","caller":"traceutil/trace.go:172","msg":"trace[295123810] transaction","detail":"{read_only:false; response_revision:1420; number_of_response:1; }","duration":"157.031985ms","start":"2025-11-15T09:40:44.528862Z","end":"2025-11-15T09:40:44.685894Z","steps":["trace[295123810] 'process raft request'  (duration: 156.931939ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:43:26 up 5 min,  0 users,  load average: 0.43, 0.96, 0.51
	Linux addons-965866 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [16fdc67c7138c080f5634727fa8d4c5f48b3470bac3e76ec088ae54c11bd8fa0] <==
	E1115 09:39:29.810991       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.39.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.39.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.39.37:443: connect: connection refused" logger="UnhandledError"
	E1115 09:39:29.817138       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.39.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.39.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.39.37:443: connect: connection refused" logger="UnhandledError"
	I1115 09:39:29.927911       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 09:40:21.409832       1 conn.go:339] Error on socket receive: read tcp 192.168.39.252:8443->192.168.39.1:53864: use of closed network connection
	E1115 09:40:21.602180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.252:8443->192.168.39.1:53900: use of closed network connection
	I1115 09:40:36.674651       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.61.65"}
	I1115 09:40:58.253315       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 09:40:58.499442       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.25.34"}
	E1115 09:40:59.936312       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1115 09:41:07.254247       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1115 09:41:30.831868       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1115 09:41:31.204206       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:41:31.204269       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:41:31.231944       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:41:31.232192       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:41:31.246743       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:41:31.246830       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:41:31.272818       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:41:31.272873       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:41:31.282346       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:41:31.282393       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1115 09:41:32.248789       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1115 09:41:32.282668       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1115 09:41:32.310529       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1115 09:43:24.894627       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.152.5"}
	
	
	==> kube-controller-manager [f9d350bfdbcdd1ea5830d2b5f6018ecae263c7dee769d06442f4fdc05e4feed2] <==
	E1115 09:41:40.591688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:41:41.017258       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:41:41.018388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1115 09:41:44.158852       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:41:44.158995       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:41:44.357775       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:41:44.357818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:41:47.797683       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:41:47.798602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:41:48.802035       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:41:48.803440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:41:50.614012       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:41:50.615268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:42:02.141254       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:42:02.142330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:42:07.736605       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:42:07.737957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:42:15.384998       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:42:15.386114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:42:32.070132       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:42:32.071496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:42:41.712746       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:42:41.714025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:42:54.938938       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:42:54.940101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a773a425284817ff4dd8291974f8d399eb43a71a493557c10b9576746be65c22] <==
	I1115 09:38:48.691569       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:38:48.813717       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:38:48.814669       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.252"]
	E1115 09:38:48.827308       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:38:49.090603       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1115 09:38:49.091356       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1115 09:38:49.091389       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:38:49.137273       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:38:49.139499       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:38:49.139528       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:38:49.158166       1 config.go:200] "Starting service config controller"
	I1115 09:38:49.158181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:38:49.158200       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:38:49.158203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:38:49.158258       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:38:49.158262       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:38:49.168866       1 config.go:309] "Starting node config controller"
	I1115 09:38:49.169273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:38:49.169281       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:38:49.258836       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:38:49.258934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:38:49.258949       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b58525f1b47d7521580ef4f8316ca8cda46ea084438178dd52831600c74e7ede] <==
	E1115 09:38:37.001226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:38:37.001268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:38:37.001322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:38:37.001358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:38:37.001400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:38:37.010185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:38:37.016747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:38:37.016797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:38:37.016831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:38:37.016860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:38:37.017048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:38:37.017093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:38:37.822573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:38:37.822746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:38:37.855571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:38:37.869143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:38:37.945791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:38:37.957166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:38:38.031828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:38:38.051388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:38:38.062120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:38:38.064609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:38:38.155862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:38:38.215137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1115 09:38:39.479553       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:41:46 addons-965866 kubelet[1506]: I1115 09:41:46.703382    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:41:49 addons-965866 kubelet[1506]: E1115 09:41:49.991284    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199709990839543  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:41:49 addons-965866 kubelet[1506]: E1115 09:41:49.991324    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199709990839543  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:41:59 addons-965866 kubelet[1506]: E1115 09:41:59.994149    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199719993536025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:41:59 addons-965866 kubelet[1506]: E1115 09:41:59.994181    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199719993536025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:09 addons-965866 kubelet[1506]: E1115 09:42:09.997589    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199729997063380  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:09 addons-965866 kubelet[1506]: E1115 09:42:09.997638    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199729997063380  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:20 addons-965866 kubelet[1506]: E1115 09:42:20.000579    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199740000106515  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:20 addons-965866 kubelet[1506]: E1115 09:42:20.000605    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199740000106515  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:30 addons-965866 kubelet[1506]: E1115 09:42:30.003488    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199750003039210  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:30 addons-965866 kubelet[1506]: E1115 09:42:30.003532    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199750003039210  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:40 addons-965866 kubelet[1506]: E1115 09:42:40.006304    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199760005906450  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:40 addons-965866 kubelet[1506]: E1115 09:42:40.006349    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199760005906450  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:50 addons-965866 kubelet[1506]: E1115 09:42:50.009655    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199770009166123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:50 addons-965866 kubelet[1506]: E1115 09:42:50.009683    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199770009166123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:42:54 addons-965866 kubelet[1506]: I1115 09:42:54.703662    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cnlhx" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:42:58 addons-965866 kubelet[1506]: I1115 09:42:58.704041    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-24wrk" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:43:00 addons-965866 kubelet[1506]: E1115 09:43:00.012331    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199780011862823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:43:00 addons-965866 kubelet[1506]: E1115 09:43:00.012376    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199780011862823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:43:08 addons-965866 kubelet[1506]: I1115 09:43:08.703538    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:43:10 addons-965866 kubelet[1506]: E1115 09:43:10.014607    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199790014231977  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:43:10 addons-965866 kubelet[1506]: E1115 09:43:10.014630    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199790014231977  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:43:20 addons-965866 kubelet[1506]: E1115 09:43:20.017153    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763199800016673564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:43:20 addons-965866 kubelet[1506]: E1115 09:43:20.017196    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763199800016673564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:43:24 addons-965866 kubelet[1506]: I1115 09:43:24.985764    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrsc\" (UniqueName: \"kubernetes.io/projected/6027fb4a-c8d0-41f8-bfee-d4d7c61890b8-kube-api-access-nbrsc\") pod \"hello-world-app-5d498dc89-sq4cf\" (UID: \"6027fb4a-c8d0-41f8-bfee-d4d7c61890b8\") " pod="default/hello-world-app-5d498dc89-sq4cf"
	
	
	==> storage-provisioner [3ee47b987d8e9b6511ea1b1d3f9d4cfa75832cba00924ddb4641faee9c4d2c2d] <==
	W1115 09:43:01.941290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:03.945565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:03.957101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:05.961888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:05.967677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:07.971267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:07.976684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:09.983774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:09.992206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:11.995920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:12.001520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:14.004546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:14.009751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:16.013128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:16.020584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:18.023847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:18.029329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:20.032691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:20.040682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:22.044434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:22.049413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:24.053331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:24.061243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:26.065789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:43:26.074843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
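Note on the storage-provisioner excerpt above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" lines are deprecation warnings returned by the API server and surfaced by client-go on every poll; they recur continuously and do not by themselves indicate a failure. A minimal check that the replacement resource is served, as a hedged sketch reusing the kubectl context from this test run:

	# sketch: list EndpointSlice objects (the suggested replacement for v1 Endpoints) across namespaces
	kubectl --context addons-965866 get endpointslices.discovery.k8s.io -A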
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-965866 -n addons-965866
helpers_test.go:269: (dbg) Run:  kubectl --context addons-965866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sq4cf ingress-nginx-admission-create-xbjfh ingress-nginx-admission-patch-r96qz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-965866 describe pod hello-world-app-5d498dc89-sq4cf ingress-nginx-admission-create-xbjfh ingress-nginx-admission-patch-r96qz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-965866 describe pod hello-world-app-5d498dc89-sq4cf ingress-nginx-admission-create-xbjfh ingress-nginx-admission-patch-r96qz: exit status 1 (85.482457ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-sq4cf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-965866/192.168.39.252
	Start Time:       Sat, 15 Nov 2025 09:43:24 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbrsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nbrsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sq4cf to addons-965866
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xbjfh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-r96qz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-965866 describe pod hello-world-app-5d498dc89-sq4cf ingress-nginx-admission-create-xbjfh ingress-nginx-admission-patch-r96qz: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable ingress-dns --alsologtostderr -v=1: (1.020802509s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable ingress --alsologtostderr -v=1: (7.748159175s)
--- FAIL: TestAddons/parallel/Ingress (157.99s)
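The describe output captured above shows hello-world-app-5d498dc89-sq4cf still Pending in ContainerCreating, with the kubelet having started pulling docker.io/kicbase/echo-server:1.0 only a couple of seconds before the post-mortem ran, so its non-running state at that moment is expected rather than an additional error. A hedged sketch for waiting on that pod manually, assuming the test's kubectl context and the app=hello-world-app label shown in the describe output:

	# sketch: wait up to 2 minutes for the hello-world-app pod to report Ready
	kubectl --context addons-965866 wait --for=condition=Ready pod -l app=hello-world-app --timeout=120s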

                                                
                                    
x
+
TestPreload (126.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-934107 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1115 10:26:15.058585  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-934107 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m6.507467319s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934107 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-934107 image pull gcr.io/k8s-minikube/busybox: (3.668542353s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-934107
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-934107: (7.164596929s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-934107 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-934107 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (46.003947806s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934107 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
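The image list above contains only the preloaded Kubernetes v1.32.0 system images, so the gcr.io/k8s-minikube/busybox image pulled before the stop was not retained across the restart, which is exactly what the assertion checks. A minimal manual reproduction of the failing sequence, as a hedged sketch assuming the same profile name and minikube binary used in this run:

	# sketch: pull an extra image, restart the profile, then confirm the image is still listed
	out/minikube-linux-amd64 -p test-preload-934107 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-934107
	out/minikube-linux-amd64 start -p test-preload-934107 --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-934107 image list | grep busybox   # empty output reproduces the failure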
panic.go:636: *** TestPreload FAILED at 2025-11-15 10:27:48.553386408 +0000 UTC m=+2999.108025286
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-934107 -n test-preload-934107
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934107 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-934107 logs -n 25: (1.132366247s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-998010 ssh -n multinode-998010-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ ssh     │ multinode-998010 ssh -n multinode-998010 sudo cat /home/docker/cp-test_multinode-998010-m03_multinode-998010.txt                                          │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ cp      │ multinode-998010 cp multinode-998010-m03:/home/docker/cp-test.txt multinode-998010-m02:/home/docker/cp-test_multinode-998010-m03_multinode-998010-m02.txt │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ ssh     │ multinode-998010 ssh -n multinode-998010-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ ssh     │ multinode-998010 ssh -n multinode-998010-m02 sudo cat /home/docker/cp-test_multinode-998010-m03_multinode-998010-m02.txt                                  │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ node    │ multinode-998010 node stop m03                                                                                                                            │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ node    │ multinode-998010 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:15 UTC │
	│ node    │ list -p multinode-998010                                                                                                                                  │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │                     │
	│ stop    │ -p multinode-998010                                                                                                                                       │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:15 UTC │ 15 Nov 25 10:18 UTC │
	│ start   │ -p multinode-998010 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:18 UTC │ 15 Nov 25 10:20 UTC │
	│ node    │ list -p multinode-998010                                                                                                                                  │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:20 UTC │                     │
	│ node    │ multinode-998010 node delete m03                                                                                                                          │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:20 UTC │ 15 Nov 25 10:20 UTC │
	│ stop    │ multinode-998010 stop                                                                                                                                     │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:20 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p multinode-998010 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:25 UTC │
	│ node    │ list -p multinode-998010                                                                                                                                  │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │                     │
	│ start   │ -p multinode-998010-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-998010-m02 │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │                     │
	│ start   │ -p multinode-998010-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-998010-m03 │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ node    │ add -p multinode-998010                                                                                                                                   │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │                     │
	│ delete  │ -p multinode-998010-m03                                                                                                                                   │ multinode-998010-m03 │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ delete  │ -p multinode-998010                                                                                                                                       │ multinode-998010     │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ start   │ -p test-preload-934107 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-934107  │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:26 UTC │
	│ image   │ test-preload-934107 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-934107  │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:26 UTC │
	│ stop    │ -p test-preload-934107                                                                                                                                    │ test-preload-934107  │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p test-preload-934107 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-934107  │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	│ image   │ test-preload-934107 image list                                                                                                                            │ test-preload-934107  │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:27:02
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:27:02.407879  439257 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:27:02.408149  439257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:02.408158  439257 out.go:374] Setting ErrFile to fd 2...
	I1115 10:27:02.408162  439257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:02.408368  439257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:27:02.408835  439257 out.go:368] Setting JSON to false
	I1115 10:27:02.409761  439257 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7769,"bootTime":1763194653,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:27:02.409858  439257 start.go:143] virtualization: kvm guest
	I1115 10:27:02.411932  439257 out.go:179] * [test-preload-934107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:27:02.413007  439257 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:27:02.413053  439257 notify.go:221] Checking for updates...
	I1115 10:27:02.415096  439257 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:27:02.416369  439257 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:27:02.417680  439257 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 10:27:02.418923  439257 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:27:02.423249  439257 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:27:02.424918  439257 config.go:182] Loaded profile config "test-preload-934107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1115 10:27:02.426518  439257 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 10:27:02.427680  439257 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:27:02.465610  439257 out.go:179] * Using the kvm2 driver based on existing profile
	I1115 10:27:02.466713  439257 start.go:309] selected driver: kvm2
	I1115 10:27:02.466731  439257 start.go:930] validating driver "kvm2" against &{Name:test-preload-934107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-934107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:02.466828  439257 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:27:02.467983  439257 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:27:02.468022  439257 cni.go:84] Creating CNI manager for ""
	I1115 10:27:02.468073  439257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:27:02.468139  439257 start.go:353] cluster config:
	{Name:test-preload-934107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-934107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:02.468235  439257 iso.go:125] acquiring lock: {Name:mke3d0b50f750b07aabde39a6bc9fa707eafad32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:02.469721  439257 out.go:179] * Starting "test-preload-934107" primary control-plane node in "test-preload-934107" cluster
	I1115 10:27:02.470865  439257 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1115 10:27:02.567247  439257 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1115 10:27:02.567300  439257 cache.go:65] Caching tarball of preloaded images
	I1115 10:27:02.567481  439257 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1115 10:27:02.569360  439257 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1115 10:27:02.570642  439257 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 10:27:02.596184  439257 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1115 10:27:02.596257  439257 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1115 10:27:05.620642  439257 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1115 10:27:05.620798  439257 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/config.json ...
	I1115 10:27:05.621038  439257 start.go:360] acquireMachinesLock for test-preload-934107: {Name:mk50d09d451dfb6834d3dcf4331d8b4da7231bd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 10:27:05.621108  439257 start.go:364] duration metric: took 44.835µs to acquireMachinesLock for "test-preload-934107"
	I1115 10:27:05.621121  439257 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:27:05.621127  439257 fix.go:54] fixHost starting: 
	I1115 10:27:05.623234  439257 fix.go:112] recreateIfNeeded on test-preload-934107: state=Stopped err=<nil>
	W1115 10:27:05.623268  439257 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:27:05.624962  439257 out.go:252] * Restarting existing kvm2 VM for "test-preload-934107" ...
	I1115 10:27:05.624996  439257 main.go:143] libmachine: starting domain...
	I1115 10:27:05.625007  439257 main.go:143] libmachine: ensuring networks are active...
	I1115 10:27:05.625871  439257 main.go:143] libmachine: Ensuring network default is active
	I1115 10:27:05.626234  439257 main.go:143] libmachine: Ensuring network mk-test-preload-934107 is active
	I1115 10:27:05.626599  439257 main.go:143] libmachine: getting domain XML...
	I1115 10:27:05.627889  439257 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-934107</name>
	  <uuid>948f8273-ed7f-4bd3-a8b0-0dc23f15d493</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/test-preload-934107.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:9a:4a:2f'/>
	      <source network='mk-test-preload-934107'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ea:84:80'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1115 10:27:06.894526  439257 main.go:143] libmachine: waiting for domain to start...
	I1115 10:27:06.896044  439257 main.go:143] libmachine: domain is now running
	I1115 10:27:06.896070  439257 main.go:143] libmachine: waiting for IP...
	I1115 10:27:06.896963  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:06.897498  439257 main.go:143] libmachine: domain test-preload-934107 has current primary IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:06.897516  439257 main.go:143] libmachine: found domain IP: 192.168.39.107
	I1115 10:27:06.897524  439257 main.go:143] libmachine: reserving static IP address...
	I1115 10:27:06.897968  439257 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-934107", mac: "52:54:00:9a:4a:2f", ip: "192.168.39.107"} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:26:00 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:06.898006  439257 main.go:143] libmachine: skip adding static IP to network mk-test-preload-934107 - found existing host DHCP lease matching {name: "test-preload-934107", mac: "52:54:00:9a:4a:2f", ip: "192.168.39.107"}
	I1115 10:27:06.898025  439257 main.go:143] libmachine: reserved static IP address 192.168.39.107 for domain test-preload-934107
	I1115 10:27:06.898041  439257 main.go:143] libmachine: waiting for SSH...
	I1115 10:27:06.898053  439257 main.go:143] libmachine: Getting to WaitForSSH function...
	I1115 10:27:06.900770  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:06.901178  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:26:00 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:06.901210  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:06.901388  439257 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:06.901802  439257 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1115 10:27:06.901825  439257 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1115 10:27:09.992031  439257 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I1115 10:27:16.072107  439257 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I1115 10:27:19.184465  439257 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:27:19.188442  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.189037  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.189074  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.189337  439257 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/config.json ...
	I1115 10:27:19.189556  439257 machine.go:94] provisionDockerMachine start ...
	I1115 10:27:19.191695  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.192034  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.192059  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.192194  439257 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:19.192372  439257 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1115 10:27:19.192381  439257 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:27:19.303745  439257 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1115 10:27:19.303801  439257 buildroot.go:166] provisioning hostname "test-preload-934107"
	I1115 10:27:19.306933  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.307378  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.307408  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.307590  439257 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:19.307846  439257 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1115 10:27:19.307863  439257 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-934107 && echo "test-preload-934107" | sudo tee /etc/hostname
	I1115 10:27:19.437498  439257 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-934107
	
	I1115 10:27:19.440623  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.441067  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.441099  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.441298  439257 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:19.441509  439257 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1115 10:27:19.441531  439257 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-934107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-934107/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-934107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:27:19.561817  439257 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:27:19.561848  439257 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 10:27:19.561884  439257 buildroot.go:174] setting up certificates
	I1115 10:27:19.561906  439257 provision.go:84] configureAuth start
	I1115 10:27:19.565109  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.565749  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.565787  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.568341  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.568760  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.568786  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.568968  439257 provision.go:143] copyHostCerts
	I1115 10:27:19.569051  439257 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem, removing ...
	I1115 10:27:19.569073  439257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem
	I1115 10:27:19.569162  439257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 10:27:19.569295  439257 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem, removing ...
	I1115 10:27:19.569308  439257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem
	I1115 10:27:19.569354  439257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 10:27:19.569453  439257 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem, removing ...
	I1115 10:27:19.569464  439257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem
	I1115 10:27:19.569504  439257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 10:27:19.569585  439257 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.test-preload-934107 san=[127.0.0.1 192.168.39.107 localhost minikube test-preload-934107]
	I1115 10:27:19.724940  439257 provision.go:177] copyRemoteCerts
	I1115 10:27:19.725021  439257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:27:19.727564  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.727918  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.727945  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.728109  439257 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/id_rsa Username:docker}
	I1115 10:27:19.827759  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:27:19.866884  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:27:19.898348  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:27:19.931906  439257 provision.go:87] duration metric: took 369.980006ms to configureAuth
	I1115 10:27:19.931941  439257 buildroot.go:189] setting minikube options for container-runtime
	I1115 10:27:19.932142  439257 config.go:182] Loaded profile config "test-preload-934107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1115 10:27:19.934770  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.935166  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:19.935195  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:19.935346  439257 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:19.935572  439257 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1115 10:27:19.935606  439257 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:27:20.185925  439257 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:27:20.185956  439257 machine.go:97] duration metric: took 996.384538ms to provisionDockerMachine
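The SSH command above drops a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarts crio. A minimal sketch of inspecting that drop-in by hand, assuming the test-preload-934107 profile is still running and reachable with minikube ssh:

    # sketch: check the drop-in and the runtime it restarts (profile name taken from this log)
    minikube -p test-preload-934107 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p test-preload-934107 ssh -- sudo systemctl status crio --no-pager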
	I1115 10:27:20.185971  439257 start.go:293] postStartSetup for "test-preload-934107" (driver="kvm2")
	I1115 10:27:20.185984  439257 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:27:20.186065  439257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:27:20.188600  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.188995  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:20.189032  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.189170  439257 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/id_rsa Username:docker}
	I1115 10:27:20.276338  439257 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:27:20.281299  439257 info.go:137] Remote host: Buildroot 2025.02
	I1115 10:27:20.281332  439257 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/addons for local assets ...
	I1115 10:27:20.281413  439257 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/files for local assets ...
	I1115 10:27:20.281512  439257 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem -> 4168012.pem in /etc/ssl/certs
	I1115 10:27:20.281632  439257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:27:20.293278  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem --> /etc/ssl/certs/4168012.pem (1708 bytes)
	I1115 10:27:20.324391  439257 start.go:296] duration metric: took 138.399598ms for postStartSetup
	I1115 10:27:20.324452  439257 fix.go:56] duration metric: took 14.70332338s for fixHost
	I1115 10:27:20.327649  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.328173  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:20.328212  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.328429  439257 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:20.328682  439257 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1115 10:27:20.328697  439257 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 10:27:20.438296  439257 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763202440.400246433
	
	I1115 10:27:20.438335  439257 fix.go:216] guest clock: 1763202440.400246433
	I1115 10:27:20.438348  439257 fix.go:229] Guest: 2025-11-15 10:27:20.400246433 +0000 UTC Remote: 2025-11-15 10:27:20.324458067 +0000 UTC m=+17.966655800 (delta=75.788366ms)
	I1115 10:27:20.438376  439257 fix.go:200] guest clock delta is within tolerance: 75.788366ms
	I1115 10:27:20.438385  439257 start.go:83] releasing machines lock for "test-preload-934107", held for 14.817266692s
	I1115 10:27:20.441530  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.442034  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:20.442067  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.442644  439257 ssh_runner.go:195] Run: cat /version.json
	I1115 10:27:20.442742  439257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:27:20.445877  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.446126  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.446352  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:20.446386  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.446507  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:20.446532  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:20.446564  439257 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/id_rsa Username:docker}
	I1115 10:27:20.446735  439257 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/id_rsa Username:docker}
	I1115 10:27:20.526457  439257 ssh_runner.go:195] Run: systemctl --version
	I1115 10:27:20.551195  439257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:27:20.694527  439257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:27:20.701767  439257 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:27:20.701840  439257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:27:20.721633  439257 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:27:20.721686  439257 start.go:496] detecting cgroup driver to use...
	I1115 10:27:20.721756  439257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:27:20.741074  439257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:27:20.758605  439257 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:27:20.758709  439257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:27:20.776266  439257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:27:20.792689  439257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:27:20.940728  439257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:27:21.157428  439257 docker.go:234] disabling docker service ...
	I1115 10:27:21.157505  439257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:27:21.174339  439257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:27:21.190768  439257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:27:21.353354  439257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:27:21.500494  439257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:27:21.517272  439257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:27:21.539915  439257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1115 10:27:21.539984  439257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.552717  439257 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:27:21.552790  439257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.565498  439257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.578514  439257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.591570  439257 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:27:21.605335  439257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.618224  439257 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.639160  439257 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:21.652399  439257 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:27:21.663104  439257 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 10:27:21.663190  439257 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 10:27:21.683826  439257 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
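The sysctl probe above fails because the br_netfilter module is not loaded yet (/proc/sys/net/bridge/ does not exist), so minikube loads the module and enables IPv4 forwarding before restarting CRI-O. A sketch of the same check-and-fix run manually on the guest, using only commands already shown in this log:

    # sketch: load br_netfilter, after which the bridge-nf sysctl becomes visible
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"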
	I1115 10:27:21.696462  439257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:21.843944  439257 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:27:21.952088  439257 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:27:21.952179  439257 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:27:21.958050  439257 start.go:564] Will wait 60s for crictl version
	I1115 10:27:21.958122  439257 ssh_runner.go:195] Run: which crictl
	I1115 10:27:21.962393  439257 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 10:27:21.999998  439257 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 10:27:22.000101  439257 ssh_runner.go:195] Run: crio --version
	I1115 10:27:22.028781  439257 ssh_runner.go:195] Run: crio --version
	I1115 10:27:22.060030  439257 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1115 10:27:22.064102  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:22.064471  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:22.064494  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:22.064671  439257 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:22.068964  439257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:27:22.084448  439257 kubeadm.go:884] updating cluster {Name:test-preload-934107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-934107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:22.084559  439257 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1115 10:27:22.084598  439257 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:22.121966  439257 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1115 10:27:22.122065  439257 ssh_runner.go:195] Run: which lz4
	I1115 10:27:22.126351  439257 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1115 10:27:22.131504  439257 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1115 10:27:22.131536  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1115 10:27:23.563225  439257 crio.go:462] duration metric: took 1.436902907s to copy over tarball
	I1115 10:27:23.563325  439257 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1115 10:27:25.229874  439257 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.666511309s)
	I1115 10:27:25.229913  439257 crio.go:469] duration metric: took 1.666651626s to extract the tarball
	I1115 10:27:25.229921  439257 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1115 10:27:25.270944  439257 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:25.317340  439257 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:25.317364  439257 cache_images.go:86] Images are preloaded, skipping loading
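Since crictl reported no preloaded kube-apiserver image, the cached tarball (preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4, ~398 MB) is copied to the guest and unpacked under /var, after which the image listing is complete. A sketch of the manual equivalent of that extraction, assuming the tarball has already been copied to /preloaded.tar.lz4 as above:

    # sketch: unpack the preload tarball into CRI-O's storage and re-check the images
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json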
	I1115 10:27:25.317372  439257 kubeadm.go:935] updating node { 192.168.39.107 8443 v1.32.0 crio true true} ...
	I1115 10:27:25.317476  439257 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-934107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-934107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
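The kubelet unit drop-in above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 319-byte scp further down in this log). A sketch of confirming what systemd actually loads for the kubelet, assuming minikube ssh access to this profile:

    # sketch: show the merged kubelet unit as systemd sees it
    minikube -p test-preload-934107 ssh -- sudo systemctl cat kubelet --no-pager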
	I1115 10:27:25.317544  439257 ssh_runner.go:195] Run: crio config
	I1115 10:27:25.364117  439257 cni.go:84] Creating CNI manager for ""
	I1115 10:27:25.364144  439257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:27:25.364161  439257 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:25.364185  439257 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-934107 NodeName:test-preload-934107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:25.364311  439257 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-934107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.107"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
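	The configuration rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file) is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new further down in this log, then diffs against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A sketch of reproducing that check on the guest, with paths taken from this log:

    # sketch: view the freshly rendered config and diff it against the one in place
    minikube -p test-preload-934107 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p test-preload-934107 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new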
	
	I1115 10:27:25.364374  439257 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1115 10:27:25.376955  439257 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:25.377047  439257 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:25.389392  439257 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1115 10:27:25.410738  439257 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:25.431391  439257 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1115 10:27:25.452071  439257 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:25.456394  439257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:27:25.471148  439257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:25.617931  439257 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:25.659036  439257 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107 for IP: 192.168.39.107
	I1115 10:27:25.659068  439257 certs.go:195] generating shared ca certs ...
	I1115 10:27:25.659094  439257 certs.go:227] acquiring lock for ca certs: {Name:mk02a14faa29b024d0296173a778127e8da9e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:25.659300  439257 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key
	I1115 10:27:25.659359  439257 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key
	I1115 10:27:25.659375  439257 certs.go:257] generating profile certs ...
	I1115 10:27:25.659508  439257 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.key
	I1115 10:27:25.659587  439257 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/apiserver.key.c8ad4afc
	I1115 10:27:25.659634  439257 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/proxy-client.key
	I1115 10:27:25.659816  439257 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/416801.pem (1338 bytes)
	W1115 10:27:25.659864  439257 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-412813/.minikube/certs/416801_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:25.659887  439257 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:27:25.659922  439257 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:25.659956  439257 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:25.659991  439257 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem (1675 bytes)
	I1115 10:27:25.660055  439257 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem (1708 bytes)
	I1115 10:27:25.660873  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:25.703133  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:25.740308  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:25.770283  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:25.800169  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:27:25.829720  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:27:25.859361  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:25.888602  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:27:25.919001  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/416801.pem --> /usr/share/ca-certificates/416801.pem (1338 bytes)
	I1115 10:27:25.949006  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem --> /usr/share/ca-certificates/4168012.pem (1708 bytes)
	I1115 10:27:25.978770  439257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:26.009448  439257 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:26.031344  439257 ssh_runner.go:195] Run: openssl version
	I1115 10:27:26.038139  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/416801.pem && ln -fs /usr/share/ca-certificates/416801.pem /etc/ssl/certs/416801.pem"
	I1115 10:27:26.051609  439257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/416801.pem
	I1115 10:27:26.057009  439257 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:45 /usr/share/ca-certificates/416801.pem
	I1115 10:27:26.057065  439257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/416801.pem
	I1115 10:27:26.064772  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/416801.pem /etc/ssl/certs/51391683.0"
	I1115 10:27:26.079758  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4168012.pem && ln -fs /usr/share/ca-certificates/4168012.pem /etc/ssl/certs/4168012.pem"
	I1115 10:27:26.093293  439257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168012.pem
	I1115 10:27:26.099172  439257 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:45 /usr/share/ca-certificates/4168012.pem
	I1115 10:27:26.099253  439257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168012.pem
	I1115 10:27:26.107128  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4168012.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:26.120849  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:26.134375  439257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:26.139751  439257 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:38 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:26.139813  439257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:26.147066  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
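The openssl/ln sequence above is how the guest's trust store gets populated: each CA PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example minikubeCA.pem -> b5213941.0). A sketch of doing the same linking for one certificate by hand, assuming root on the guest:

    # sketch: compute the subject hash and create the <hash>.0 symlink OpenSSL resolves
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"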
	I1115 10:27:26.160117  439257 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:26.165519  439257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:27:26.173192  439257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:27:26.180581  439257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:27:26.188029  439257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:27:26.195997  439257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:27:26.205599  439257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:27:26.213791  439257 kubeadm.go:401] StartCluster: {Name:test-preload-934107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-934107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:26.213887  439257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:26.213945  439257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:26.254443  439257 cri.go:89] found id: ""
	I1115 10:27:26.254526  439257 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:26.267323  439257 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:27:26.267347  439257 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:27:26.267404  439257 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:27:26.279475  439257 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:27:26.280015  439257 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-934107" does not appear in /home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:27:26.280146  439257 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-412813/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-934107" cluster setting kubeconfig missing "test-preload-934107" context setting]
	I1115 10:27:26.280444  439257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/kubeconfig: {Name:mk18351328d03342e92a234b66dd855b67ad51ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:26.328120  439257 kapi.go:59] client config for test-preload-934107: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.key", CAFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:26.328567  439257 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:27:26.328585  439257 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:27:26.328589  439257 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:27:26.328593  439257 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:27:26.328597  439257 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:27:26.329032  439257 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:27:26.344429  439257 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.107
	I1115 10:27:26.344474  439257 kubeadm.go:1161] stopping kube-system containers ...
	I1115 10:27:26.344489  439257 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1115 10:27:26.344544  439257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:26.390562  439257 cri.go:89] found id: ""
	I1115 10:27:26.390635  439257 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1115 10:27:26.415567  439257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:27:26.427964  439257 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:27:26.427984  439257 kubeadm.go:158] found existing configuration files:
	
	I1115 10:27:26.428035  439257 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:27:26.439916  439257 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:27:26.439988  439257 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:27:26.451967  439257 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:27:26.462851  439257 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:27:26.462929  439257 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:27:26.474683  439257 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:27:26.485183  439257 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:27:26.485256  439257 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:27:26.496616  439257 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:27:26.507728  439257 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:27:26.507796  439257 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:27:26.520043  439257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:27:26.532575  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:27:26.590437  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:27:27.462362  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:27:27.712523  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:27:27.787171  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:27:27.854123  439257 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:27:27.854220  439257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:28.354688  439257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:28.854787  439257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:29.354438  439257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:29.389184  439257 api_server.go:72] duration metric: took 1.535072003s to wait for apiserver process to appear ...
	I1115 10:27:29.389226  439257 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:27:29.389252  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:31.311650  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:27:31.311694  439257 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:27:31.311711  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:31.389523  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:31.389558  439257 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:31.389577  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:31.394414  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:31.394449  439257 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:31.890217  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:31.908520  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:31.908556  439257 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:32.390289  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:32.398772  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:27:32.398817  439257 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:27:32.889713  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:32.895258  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I1115 10:27:32.901923  439257 api_server.go:141] control plane version: v1.32.0
	I1115 10:27:32.901954  439257 api_server.go:131] duration metric: took 3.512720539s to wait for apiserver health ...
	I1115 10:27:32.901965  439257 cni.go:84] Creating CNI manager for ""
	I1115 10:27:32.901972  439257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:27:32.903752  439257 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 10:27:32.905242  439257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 10:27:32.918065  439257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 10:27:32.940401  439257 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:27:32.946691  439257 system_pods.go:59] 7 kube-system pods found
	I1115 10:27:32.946760  439257 system_pods.go:61] "coredns-668d6bf9bc-pt7pj" [0827c1bb-612d-4ee1-ba28-42d9f6a40af0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:32.946774  439257 system_pods.go:61] "etcd-test-preload-934107" [1b2432a1-5b47-4a4c-900e-302aab559724] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:32.946793  439257 system_pods.go:61] "kube-apiserver-test-preload-934107" [82242337-4ad2-41e7-bd31-04102a5cdca0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:32.946808  439257 system_pods.go:61] "kube-controller-manager-test-preload-934107" [f5a20449-81e3-47ed-82e5-77ecf8251f0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:32.946823  439257 system_pods.go:61] "kube-proxy-89fqr" [de7fca30-1c9d-43ae-b0bf-9b75f09fe750] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:27:32.946836  439257 system_pods.go:61] "kube-scheduler-test-preload-934107" [0caf1e11-de6b-4cb1-a08f-b299da040a01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:32.946844  439257 system_pods.go:61] "storage-provisioner" [bfab650c-092b-4952-8c4f-66bb8eb60a69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:27:32.946860  439257 system_pods.go:74] duration metric: took 6.430682ms to wait for pod list to return data ...
	I1115 10:27:32.946875  439257 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:27:32.950800  439257 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 10:27:32.950839  439257 node_conditions.go:123] node cpu capacity is 2
	I1115 10:27:32.950858  439257 node_conditions.go:105] duration metric: took 3.972956ms to run NodePressure ...
	I1115 10:27:32.950939  439257 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:27:33.264074  439257 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1115 10:27:33.268373  439257 kubeadm.go:744] kubelet initialised
	I1115 10:27:33.268405  439257 kubeadm.go:745] duration metric: took 4.292574ms waiting for restarted kubelet to initialise ...
	I1115 10:27:33.268432  439257 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:27:33.285447  439257 ops.go:34] apiserver oom_adj: -16
	I1115 10:27:33.285483  439257 kubeadm.go:602] duration metric: took 7.018127372s to restartPrimaryControlPlane
	I1115 10:27:33.285498  439257 kubeadm.go:403] duration metric: took 7.071722736s to StartCluster
	I1115 10:27:33.285524  439257 settings.go:142] acquiring lock: {Name:mk51bbf0fd9b357d299ebd118e728450a954032c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:33.285616  439257 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:27:33.286300  439257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/kubeconfig: {Name:mk18351328d03342e92a234b66dd855b67ad51ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:33.286555  439257 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:33.286693  439257 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:27:33.286801  439257 addons.go:70] Setting storage-provisioner=true in profile "test-preload-934107"
	I1115 10:27:33.286826  439257 addons.go:239] Setting addon storage-provisioner=true in "test-preload-934107"
	W1115 10:27:33.286841  439257 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:27:33.286830  439257 addons.go:70] Setting default-storageclass=true in profile "test-preload-934107"
	I1115 10:27:33.286862  439257 config.go:182] Loaded profile config "test-preload-934107": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1115 10:27:33.286883  439257 host.go:66] Checking if "test-preload-934107" exists ...
	I1115 10:27:33.286867  439257 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-934107"
	I1115 10:27:33.288991  439257 out.go:179] * Verifying Kubernetes components...
	I1115 10:27:33.289372  439257 kapi.go:59] client config for test-preload-934107: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.key", CAFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:33.289694  439257 addons.go:239] Setting addon default-storageclass=true in "test-preload-934107"
	W1115 10:27:33.289715  439257 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:27:33.289738  439257 host.go:66] Checking if "test-preload-934107" exists ...
	I1115 10:27:33.290130  439257 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:27:33.290216  439257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:33.291428  439257 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:27:33.291445  439257 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:27:33.291617  439257 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:27:33.291634  439257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:27:33.294344  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:33.294505  439257 main.go:143] libmachine: domain test-preload-934107 has defined MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:33.294809  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:33.294839  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:33.295018  439257 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:4a:2f", ip: ""} in network mk-test-preload-934107: {Iface:virbr1 ExpiryTime:2025-11-15 11:27:16 +0000 UTC Type:0 Mac:52:54:00:9a:4a:2f Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-934107 Clientid:01:52:54:00:9a:4a:2f}
	I1115 10:27:33.295045  439257 main.go:143] libmachine: domain test-preload-934107 has defined IP address 192.168.39.107 and MAC address 52:54:00:9a:4a:2f in network mk-test-preload-934107
	I1115 10:27:33.295054  439257 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/id_rsa Username:docker}
	I1115 10:27:33.295321  439257 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/test-preload-934107/id_rsa Username:docker}
	I1115 10:27:33.548728  439257 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:33.573102  439257 node_ready.go:35] waiting up to 6m0s for node "test-preload-934107" to be "Ready" ...
	I1115 10:27:33.576217  439257 node_ready.go:49] node "test-preload-934107" is "Ready"
	I1115 10:27:33.576262  439257 node_ready.go:38] duration metric: took 3.088234ms for node "test-preload-934107" to be "Ready" ...
	I1115 10:27:33.576283  439257 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:27:33.576349  439257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:27:33.600954  439257 api_server.go:72] duration metric: took 314.364183ms to wait for apiserver process to appear ...
	I1115 10:27:33.600988  439257 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:27:33.601012  439257 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1115 10:27:33.608139  439257 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I1115 10:27:33.609238  439257 api_server.go:141] control plane version: v1.32.0
	I1115 10:27:33.609262  439257 api_server.go:131] duration metric: took 8.266704ms to wait for apiserver health ...
	I1115 10:27:33.609282  439257 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:27:33.613179  439257 system_pods.go:59] 7 kube-system pods found
	I1115 10:27:33.613205  439257 system_pods.go:61] "coredns-668d6bf9bc-pt7pj" [0827c1bb-612d-4ee1-ba28-42d9f6a40af0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:33.613212  439257 system_pods.go:61] "etcd-test-preload-934107" [1b2432a1-5b47-4a4c-900e-302aab559724] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:33.613220  439257 system_pods.go:61] "kube-apiserver-test-preload-934107" [82242337-4ad2-41e7-bd31-04102a5cdca0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:33.613229  439257 system_pods.go:61] "kube-controller-manager-test-preload-934107" [f5a20449-81e3-47ed-82e5-77ecf8251f0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:33.613235  439257 system_pods.go:61] "kube-proxy-89fqr" [de7fca30-1c9d-43ae-b0bf-9b75f09fe750] Running
	I1115 10:27:33.613254  439257 system_pods.go:61] "kube-scheduler-test-preload-934107" [0caf1e11-de6b-4cb1-a08f-b299da040a01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:33.613261  439257 system_pods.go:61] "storage-provisioner" [bfab650c-092b-4952-8c4f-66bb8eb60a69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:27:33.613268  439257 system_pods.go:74] duration metric: took 3.97583ms to wait for pod list to return data ...
	I1115 10:27:33.613277  439257 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:27:33.615539  439257 default_sa.go:45] found service account: "default"
	I1115 10:27:33.615559  439257 default_sa.go:55] duration metric: took 2.276371ms for default service account to be created ...
	I1115 10:27:33.615567  439257 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:27:33.617759  439257 system_pods.go:86] 7 kube-system pods found
	I1115 10:27:33.617783  439257 system_pods.go:89] "coredns-668d6bf9bc-pt7pj" [0827c1bb-612d-4ee1-ba28-42d9f6a40af0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:27:33.617791  439257 system_pods.go:89] "etcd-test-preload-934107" [1b2432a1-5b47-4a4c-900e-302aab559724] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:27:33.617798  439257 system_pods.go:89] "kube-apiserver-test-preload-934107" [82242337-4ad2-41e7-bd31-04102a5cdca0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:27:33.617803  439257 system_pods.go:89] "kube-controller-manager-test-preload-934107" [f5a20449-81e3-47ed-82e5-77ecf8251f0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:27:33.617808  439257 system_pods.go:89] "kube-proxy-89fqr" [de7fca30-1c9d-43ae-b0bf-9b75f09fe750] Running
	I1115 10:27:33.617817  439257 system_pods.go:89] "kube-scheduler-test-preload-934107" [0caf1e11-de6b-4cb1-a08f-b299da040a01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:27:33.617822  439257 system_pods.go:89] "storage-provisioner" [bfab650c-092b-4952-8c4f-66bb8eb60a69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:27:33.617829  439257 system_pods.go:126] duration metric: took 2.2577ms to wait for k8s-apps to be running ...
	I1115 10:27:33.617836  439257 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:27:33.617880  439257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:27:33.635020  439257 system_svc.go:56] duration metric: took 17.16801ms WaitForService to wait for kubelet
	I1115 10:27:33.635054  439257 kubeadm.go:587] duration metric: took 348.470518ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:27:33.635072  439257 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:27:33.637695  439257 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 10:27:33.637719  439257 node_conditions.go:123] node cpu capacity is 2
	I1115 10:27:33.637732  439257 node_conditions.go:105] duration metric: took 2.655557ms to run NodePressure ...
	I1115 10:27:33.637747  439257 start.go:242] waiting for startup goroutines ...
	I1115 10:27:33.666082  439257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:27:33.673137  439257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:27:34.363160  439257 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:27:34.364402  439257 addons.go:515] duration metric: took 1.077725063s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:27:34.364446  439257 start.go:247] waiting for cluster config update ...
	I1115 10:27:34.364463  439257 start.go:256] writing updated cluster config ...
	I1115 10:27:34.364800  439257 ssh_runner.go:195] Run: rm -f paused
	I1115 10:27:34.370211  439257 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:27:34.370921  439257 kapi.go:59] client config for test-preload-934107: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/test-preload-934107/client.key", CAFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:34.374521  439257 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-pt7pj" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:27:36.380651  439257 pod_ready.go:104] pod "coredns-668d6bf9bc-pt7pj" is not "Ready", error: <nil>
	I1115 10:27:37.380282  439257 pod_ready.go:94] pod "coredns-668d6bf9bc-pt7pj" is "Ready"
	I1115 10:27:37.380319  439257 pod_ready.go:86] duration metric: took 3.005775681s for pod "coredns-668d6bf9bc-pt7pj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:37.382732  439257 pod_ready.go:83] waiting for pod "etcd-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:27:39.388742  439257 pod_ready.go:104] pod "etcd-test-preload-934107" is not "Ready", error: <nil>
	W1115 10:27:41.389108  439257 pod_ready.go:104] pod "etcd-test-preload-934107" is not "Ready", error: <nil>
	W1115 10:27:43.889003  439257 pod_ready.go:104] pod "etcd-test-preload-934107" is not "Ready", error: <nil>
	I1115 10:27:45.888627  439257 pod_ready.go:94] pod "etcd-test-preload-934107" is "Ready"
	I1115 10:27:45.888656  439257 pod_ready.go:86] duration metric: took 8.505903023s for pod "etcd-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:45.890554  439257 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:45.896110  439257 pod_ready.go:94] pod "kube-apiserver-test-preload-934107" is "Ready"
	I1115 10:27:45.896134  439257 pod_ready.go:86] duration metric: took 5.556845ms for pod "kube-apiserver-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:45.899379  439257 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:47.910764  439257 pod_ready.go:94] pod "kube-controller-manager-test-preload-934107" is "Ready"
	I1115 10:27:47.910792  439257 pod_ready.go:86] duration metric: took 2.011392409s for pod "kube-controller-manager-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:47.913519  439257 pod_ready.go:83] waiting for pod "kube-proxy-89fqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:47.919705  439257 pod_ready.go:94] pod "kube-proxy-89fqr" is "Ready"
	I1115 10:27:47.919727  439257 pod_ready.go:86] duration metric: took 6.189231ms for pod "kube-proxy-89fqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:47.922892  439257 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:48.287867  439257 pod_ready.go:94] pod "kube-scheduler-test-preload-934107" is "Ready"
	I1115 10:27:48.287899  439257 pod_ready.go:86] duration metric: took 364.984314ms for pod "kube-scheduler-test-preload-934107" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:27:48.287911  439257 pod_ready.go:40] duration metric: took 13.917667153s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:27:48.333574  439257 start.go:628] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1115 10:27:48.336207  439257 out.go:203] 
	W1115 10:27:48.337546  439257 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1115 10:27:48.338766  439257 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:27:48.340041  439257 out.go:179] * Done! kubectl is now configured to use "test-preload-934107" cluster and "default" namespace by default
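
	The wait loop traced above (api_server.go) simply polls the apiserver's /healthz endpoint until it returns 200 "ok"; the [+]/[-] lines are the per-check breakdown kube-apiserver includes in a failing response. A minimal Go sketch of such a probe, assuming the same endpoint (192.168.39.107:8443) and that anonymous access to /healthz is enabled (the kubeadm default) -- an illustration only, not minikube's actual implementation:

	// healthzprobe.go: poll https://<apiserver>/healthz until it returns 200 "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the trace above; adjust for other clusters.
		const url = "https://192.168.39.107:8443/healthz"
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver's serving cert is not trusted by the host, so skip
			// verification, as "curl -k" against the same endpoint would.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	Appending ?verbose to the URL makes a healthy apiserver return the same per-check listing that the 500 responses above show.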
	
	
	==> CRI-O <==
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.157240128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202469157215442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbdf540e-b4c2-43f6-873b-0d2e62d08bbf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.157804967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05bbcfee-4c89-4746-a94e-e0a4b193191d name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.157870458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05bbcfee-4c89-4746-a94e-e0a4b193191d name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.158065232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dab97f17ba5bb856fae47c1ca52fd654efd39380dfed9648d3ba317bb593e1a,PodSandboxId:0aa080375706dfc7ad67d8aa74520e6ab7815c1ad5b682d3b212fb34fd875d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763202455682350762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pt7pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0827c1bb-612d-4ee1-ba28-42d9f6a40af0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f564b117690ebc40bb2089f934c4f56f7be783475f8dcce0f098a5be357d7a8,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763202452957633227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bfab650c-092b-4952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1f671acc49c25795890d7540b789756b8964a3b4b9c106547cf710581d517,PodSandboxId:46f64964158cd00cf8bca6017b1d5ba663aae6a9d3277ff7616b1527c9bdc040,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763202452222323447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89fqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de
7fca30-1c9d-43ae-b0bf-9b75f09fe750,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286f1c5f4926bb132f3b7cee09b4737e3157423c02a0a8a3bd0e9785fd7f794,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763202452203456012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfab650c-092b-4
952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d18cc79110231479ce2824fc76363860d724bba97101499c1462dd6fa34e87,PodSandboxId:ef9ee03e594b493abad00d9c631cac132f1836d47ac76ecf012c8b9a3dd40ce8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763202448839077325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc3a1971a1b371e6fd798
3ce1cf2040,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c690a11d95a682159aa4f26436c67e56164b3a95143e7797d10be6693c3472,PodSandboxId:ddbff22b9314b1203a07bcd7689263dad986474a50b99618428276c4940eda80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763202448805632264,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6780d1f3533bb29163c620263b9643e,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0988b124ff60d22755a30e09476e45bb870a66b2a002018c1db25a8d82e8127c,PodSandboxId:f24b8b4b9ade6770df98cb31195da18ca746ed815b8548b3083f504008d53e99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763202448790936623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc484bf464c8326d888bd00a5406eb9,},Annotations:map[string]string{io.kubern
etes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f3e369163bda4e46c71c2cef28055aa54ebbf35db22f9320c1960055e2bee,PodSandboxId:880727d41687a46ca55f7f0cbe4e92be3e6282dc9b69321bd50a11da0dc4f0b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763202448777268089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a8ea4126548abf22ca56f5ec409b6d,},Annotations:map[string]
string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05bbcfee-4c89-4746-a94e-e0a4b193191d name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.198933062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6bf46ee-d19d-470d-9d3a-d9e17dba63ec name=/runtime.v1.RuntimeService/Version
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.199020086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6bf46ee-d19d-470d-9d3a-d9e17dba63ec name=/runtime.v1.RuntimeService/Version
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.200047635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cecdb9a5-de9e-48ed-b2ed-dc7442dbcf32 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.201009084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202469200979886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cecdb9a5-de9e-48ed-b2ed-dc7442dbcf32 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.201629631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e70ed97-9246-4674-9c58-5ef5f3d99535 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.201682961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e70ed97-9246-4674-9c58-5ef5f3d99535 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.201886897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dab97f17ba5bb856fae47c1ca52fd654efd39380dfed9648d3ba317bb593e1a,PodSandboxId:0aa080375706dfc7ad67d8aa74520e6ab7815c1ad5b682d3b212fb34fd875d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763202455682350762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pt7pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0827c1bb-612d-4ee1-ba28-42d9f6a40af0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f564b117690ebc40bb2089f934c4f56f7be783475f8dcce0f098a5be357d7a8,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763202452957633227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bfab650c-092b-4952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1f671acc49c25795890d7540b789756b8964a3b4b9c106547cf710581d517,PodSandboxId:46f64964158cd00cf8bca6017b1d5ba663aae6a9d3277ff7616b1527c9bdc040,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763202452222323447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89fqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de
7fca30-1c9d-43ae-b0bf-9b75f09fe750,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286f1c5f4926bb132f3b7cee09b4737e3157423c02a0a8a3bd0e9785fd7f794,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763202452203456012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfab650c-092b-4
952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d18cc79110231479ce2824fc76363860d724bba97101499c1462dd6fa34e87,PodSandboxId:ef9ee03e594b493abad00d9c631cac132f1836d47ac76ecf012c8b9a3dd40ce8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763202448839077325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc3a1971a1b371e6fd798
3ce1cf2040,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c690a11d95a682159aa4f26436c67e56164b3a95143e7797d10be6693c3472,PodSandboxId:ddbff22b9314b1203a07bcd7689263dad986474a50b99618428276c4940eda80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763202448805632264,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6780d1f3533bb29163c620263b9643e,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0988b124ff60d22755a30e09476e45bb870a66b2a002018c1db25a8d82e8127c,PodSandboxId:f24b8b4b9ade6770df98cb31195da18ca746ed815b8548b3083f504008d53e99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763202448790936623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc484bf464c8326d888bd00a5406eb9,},Annotations:map[string]string{io.kubern
etes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f3e369163bda4e46c71c2cef28055aa54ebbf35db22f9320c1960055e2bee,PodSandboxId:880727d41687a46ca55f7f0cbe4e92be3e6282dc9b69321bd50a11da0dc4f0b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763202448777268089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a8ea4126548abf22ca56f5ec409b6d,},Annotations:map[string]
string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e70ed97-9246-4674-9c58-5ef5f3d99535 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.240853993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12d90f89-9148-4d66-aed9-f096135bbb1b name=/runtime.v1.RuntimeService/Version
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.240928192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12d90f89-9148-4d66-aed9-f096135bbb1b name=/runtime.v1.RuntimeService/Version
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.242199082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b362ce1-6a0d-4821-b57d-4d713102074e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.242667304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202469242643265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b362ce1-6a0d-4821-b57d-4d713102074e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.243299712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c91e08bc-ba8e-40ec-b589-47df210c744b name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.243394302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c91e08bc-ba8e-40ec-b589-47df210c744b name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.243569494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dab97f17ba5bb856fae47c1ca52fd654efd39380dfed9648d3ba317bb593e1a,PodSandboxId:0aa080375706dfc7ad67d8aa74520e6ab7815c1ad5b682d3b212fb34fd875d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763202455682350762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pt7pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0827c1bb-612d-4ee1-ba28-42d9f6a40af0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f564b117690ebc40bb2089f934c4f56f7be783475f8dcce0f098a5be357d7a8,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763202452957633227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bfab650c-092b-4952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1f671acc49c25795890d7540b789756b8964a3b4b9c106547cf710581d517,PodSandboxId:46f64964158cd00cf8bca6017b1d5ba663aae6a9d3277ff7616b1527c9bdc040,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763202452222323447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89fqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de
7fca30-1c9d-43ae-b0bf-9b75f09fe750,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286f1c5f4926bb132f3b7cee09b4737e3157423c02a0a8a3bd0e9785fd7f794,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763202452203456012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfab650c-092b-4
952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d18cc79110231479ce2824fc76363860d724bba97101499c1462dd6fa34e87,PodSandboxId:ef9ee03e594b493abad00d9c631cac132f1836d47ac76ecf012c8b9a3dd40ce8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763202448839077325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc3a1971a1b371e6fd798
3ce1cf2040,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c690a11d95a682159aa4f26436c67e56164b3a95143e7797d10be6693c3472,PodSandboxId:ddbff22b9314b1203a07bcd7689263dad986474a50b99618428276c4940eda80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763202448805632264,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6780d1f3533bb29163c620263b9643e,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0988b124ff60d22755a30e09476e45bb870a66b2a002018c1db25a8d82e8127c,PodSandboxId:f24b8b4b9ade6770df98cb31195da18ca746ed815b8548b3083f504008d53e99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763202448790936623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc484bf464c8326d888bd00a5406eb9,},Annotations:map[string]string{io.kubern
etes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f3e369163bda4e46c71c2cef28055aa54ebbf35db22f9320c1960055e2bee,PodSandboxId:880727d41687a46ca55f7f0cbe4e92be3e6282dc9b69321bd50a11da0dc4f0b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763202448777268089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a8ea4126548abf22ca56f5ec409b6d,},Annotations:map[string]
string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c91e08bc-ba8e-40ec-b589-47df210c744b name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.279910866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81b9472f-568d-410d-bd4b-6aa219591f8c name=/runtime.v1.RuntimeService/Version
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.279980841Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81b9472f-568d-410d-bd4b-6aa219591f8c name=/runtime.v1.RuntimeService/Version
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.281596192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8c594fb-ab83-4984-a695-59fa963334b2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.282210540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202469282185084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8c594fb-ab83-4984-a695-59fa963334b2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.282831140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b23e2067-91a9-4255-8c79-a15c15702605 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.282884705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b23e2067-91a9-4255-8c79-a15c15702605 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:27:49 test-preload-934107 crio[838]: time="2025-11-15 10:27:49.283406964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dab97f17ba5bb856fae47c1ca52fd654efd39380dfed9648d3ba317bb593e1a,PodSandboxId:0aa080375706dfc7ad67d8aa74520e6ab7815c1ad5b682d3b212fb34fd875d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763202455682350762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pt7pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0827c1bb-612d-4ee1-ba28-42d9f6a40af0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f564b117690ebc40bb2089f934c4f56f7be783475f8dcce0f098a5be357d7a8,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763202452957633227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bfab650c-092b-4952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1f671acc49c25795890d7540b789756b8964a3b4b9c106547cf710581d517,PodSandboxId:46f64964158cd00cf8bca6017b1d5ba663aae6a9d3277ff7616b1527c9bdc040,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763202452222323447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89fqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de
7fca30-1c9d-43ae-b0bf-9b75f09fe750,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286f1c5f4926bb132f3b7cee09b4737e3157423c02a0a8a3bd0e9785fd7f794,PodSandboxId:4b6a8e7ab58195f34eeea8f0fb67dd7cd1999f84c22027e26d7f53aca62ed2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763202452203456012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfab650c-092b-4
952-8c4f-66bb8eb60a69,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d18cc79110231479ce2824fc76363860d724bba97101499c1462dd6fa34e87,PodSandboxId:ef9ee03e594b493abad00d9c631cac132f1836d47ac76ecf012c8b9a3dd40ce8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763202448839077325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc3a1971a1b371e6fd798
3ce1cf2040,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c690a11d95a682159aa4f26436c67e56164b3a95143e7797d10be6693c3472,PodSandboxId:ddbff22b9314b1203a07bcd7689263dad986474a50b99618428276c4940eda80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763202448805632264,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6780d1f3533bb29163c620263b9643e,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0988b124ff60d22755a30e09476e45bb870a66b2a002018c1db25a8d82e8127c,PodSandboxId:f24b8b4b9ade6770df98cb31195da18ca746ed815b8548b3083f504008d53e99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763202448790936623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc484bf464c8326d888bd00a5406eb9,},Annotations:map[string]string{io.kubern
etes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f3e369163bda4e46c71c2cef28055aa54ebbf35db22f9320c1960055e2bee,PodSandboxId:880727d41687a46ca55f7f0cbe4e92be3e6282dc9b69321bd50a11da0dc4f0b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763202448777268089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934107,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a8ea4126548abf22ca56f5ec409b6d,},Annotations:map[string]
string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b23e2067-91a9-4255-8c79-a15c15702605 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1dab97f17ba5b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   0aa080375706d       coredns-668d6bf9bc-pt7pj
	6f564b117690e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   4b6a8e7ab5819       storage-provisioner
	85b1f671acc49       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   46f64964158cd       kube-proxy-89fqr
	a286f1c5f4926       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Exited              storage-provisioner       1                   4b6a8e7ab5819       storage-provisioner
	46d18cc791102       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   ef9ee03e594b4       kube-scheduler-test-preload-934107
	f6c690a11d95a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   ddbff22b9314b       etcd-test-preload-934107
	0988b124ff60d       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   f24b8b4b9ade6       kube-apiserver-test-preload-934107
	e47f3e369163b       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   880727d41687a       kube-controller-manager-test-preload-934107
	
	
	==> coredns [1dab97f17ba5bb856fae47c1ca52fd654efd39380dfed9648d3ba317bb593e1a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44549 - 20798 "HINFO IN 4099689867626885537.8512726792218400967. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.139048993s
	
	
	==> describe nodes <==
	Name:               test-preload-934107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-934107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=test-preload-934107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_26_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:26:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-934107
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:27:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:27:33 +0000   Sat, 15 Nov 2025 10:26:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:27:33 +0000   Sat, 15 Nov 2025 10:26:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:27:33 +0000   Sat, 15 Nov 2025 10:26:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:27:33 +0000   Sat, 15 Nov 2025 10:27:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    test-preload-934107
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 948f8273ed7f4bd3a8b00dc23f15d493
	  System UUID:                948f8273-ed7f-4bd3-a8b0-0dc23f15d493
	  Boot ID:                    8614107f-7be8-4072-ac3a-7aa76667c737
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-pt7pj                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     72s
	  kube-system                 etcd-test-preload-934107                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         79s
	  kube-system                 kube-apiserver-test-preload-934107             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-test-preload-934107    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-89fqr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-test-preload-934107             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 71s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node test-preload-934107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node test-preload-934107 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node test-preload-934107 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    77s                kubelet          Node test-preload-934107 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  77s                kubelet          Node test-preload-934107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     77s                kubelet          Node test-preload-934107 status is now: NodeHasSufficientPID
	  Normal   Starting                 77s                kubelet          Starting kubelet.
	  Normal   NodeReady                76s                kubelet          Node test-preload-934107 status is now: NodeReady
	  Normal   RegisteredNode           73s                node-controller  Node test-preload-934107 event: Registered Node test-preload-934107 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-934107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-934107 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-934107 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-934107 has been rebooted, boot id: 8614107f-7be8-4072-ac3a-7aa76667c737
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-934107 event: Registered Node test-preload-934107 in Controller
	
	
	==> dmesg <==
	[Nov15 10:27] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000040] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006507] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.970049] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.122832] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.094231] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.015980] kauditd_printk_skb: 246 callbacks suppressed
	[  +0.039159] kauditd_printk_skb: 149 callbacks suppressed
	
	
	==> etcd [f6c690a11d95a682159aa4f26436c67e56164b3a95143e7797d10be6693c3472] <==
	{"level":"info","ts":"2025-11-15T10:27:29.218605Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T10:27:29.213098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e switched to configuration voters=(17011807482017166174)"}
	{"level":"info","ts":"2025-11-15T10:27:29.222312Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","added-peer-id":"ec1614c5c0f7335e","added-peer-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2025-11-15T10:27:29.222427Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:27:29.222466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:27:29.224471Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T10:27:29.224555Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T10:27:29.224690Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2025-11-15T10:27:29.227790Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2025-11-15T10:27:30.181265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-15T10:27:30.181317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-15T10:27:30.181351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2025-11-15T10:27:30.181365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2025-11-15T10:27:30.181380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2025-11-15T10:27:30.181395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2025-11-15T10:27:30.181401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2025-11-15T10:27:30.182979Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:test-preload-934107 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T10:27:30.183020Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:27:30.183261Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:27:30.183298Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:27:30.183373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:27:30.184426Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:27:30.184439Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:27:30.185548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2025-11-15T10:27:30.185789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:27:49 up 0 min,  0 users,  load average: 1.33, 0.35, 0.12
	Linux test-preload-934107 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0988b124ff60d22755a30e09476e45bb870a66b2a002018c1db25a8d82e8127c] <==
	I1115 10:27:31.347590       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:27:31.347598       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:27:31.347674       1 shared_informer.go:320] Caches are synced for configmaps
	I1115 10:27:31.353861       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1115 10:27:31.353995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:27:31.354330       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:27:31.359035       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:27:31.359741       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1115 10:27:31.368609       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1115 10:27:31.368642       1 policy_source.go:240] refreshing policies
	I1115 10:27:31.383227       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1115 10:27:31.384030       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:27:31.384059       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:27:31.384065       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:27:31.384071       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:27:31.464730       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:27:31.860456       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1115 10:27:32.254941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:27:33.130110       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1115 10:27:33.174890       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1115 10:27:33.209699       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:27:33.217928       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:27:34.630616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1115 10:27:34.929802       1 controller.go:615] quota admission added evaluator for: endpoints
	I1115 10:27:34.980396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e47f3e369163bda4e46c71c2cef28055aa54ebbf35db22f9320c1960055e2bee] <==
	I1115 10:27:34.566872       1 shared_informer.go:320] Caches are synced for attach detach
	I1115 10:27:34.568061       1 shared_informer.go:320] Caches are synced for ephemeral
	I1115 10:27:34.571284       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1115 10:27:34.573835       1 shared_informer.go:320] Caches are synced for service account
	I1115 10:27:34.576339       1 shared_informer.go:320] Caches are synced for HPA
	I1115 10:27:34.576402       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1115 10:27:34.577544       1 shared_informer.go:320] Caches are synced for PVC protection
	I1115 10:27:34.577636       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1115 10:27:34.577668       1 shared_informer.go:320] Caches are synced for endpoint
	I1115 10:27:34.577550       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1115 10:27:34.579562       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1115 10:27:34.580716       1 shared_informer.go:320] Caches are synced for GC
	I1115 10:27:34.582974       1 shared_informer.go:320] Caches are synced for crt configmap
	I1115 10:27:34.584255       1 shared_informer.go:320] Caches are synced for resource quota
	I1115 10:27:34.588470       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1115 10:27:34.589643       1 shared_informer.go:320] Caches are synced for namespace
	I1115 10:27:34.590845       1 shared_informer.go:320] Caches are synced for daemon sets
	I1115 10:27:34.590875       1 shared_informer.go:320] Caches are synced for resource quota
	I1115 10:27:34.593203       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1115 10:27:34.604376       1 shared_informer.go:320] Caches are synced for garbage collector
	I1115 10:27:34.638486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="111.302187ms"
	I1115 10:27:34.639172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="135.911µs"
	I1115 10:27:35.983799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.371µs"
	I1115 10:27:37.019714       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.93033ms"
	I1115 10:27:37.019871       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.706µs"
	
	
	==> kube-proxy [85b1f671acc49c25795890d7540b789756b8964a3b4b9c106547cf710581d517] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1115 10:27:32.529032       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1115 10:27:32.540640       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E1115 10:27:32.540792       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:27:32.587106       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1115 10:27:32.587242       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1115 10:27:32.587292       1 server_linux.go:170] "Using iptables Proxier"
	I1115 10:27:32.589916       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:27:32.590259       1 server.go:497] "Version info" version="v1.32.0"
	I1115 10:27:32.590287       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:27:32.591689       1 config.go:199] "Starting service config controller"
	I1115 10:27:32.592101       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1115 10:27:32.592190       1 config.go:105] "Starting endpoint slice config controller"
	I1115 10:27:32.592196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1115 10:27:32.593580       1 config.go:329] "Starting node config controller"
	I1115 10:27:32.593626       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1115 10:27:32.692668       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1115 10:27:32.692711       1 shared_informer.go:320] Caches are synced for service config
	I1115 10:27:32.693696       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [46d18cc79110231479ce2824fc76363860d724bba97101499c1462dd6fa34e87] <==
	I1115 10:27:29.662641       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:27:31.310060       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:27:31.310098       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:27:31.310107       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:27:31.310168       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:27:31.371415       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1115 10:27:31.371453       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:27:31.373619       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:27:31.373707       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 10:27:31.375356       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1115 10:27:31.375433       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:27:31.473892       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: E1115 10:27:31.781958    1167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-pt7pj" podUID="0827c1bb-612d-4ee1-ba28-42d9f6a40af0"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.796522    1167 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.842694    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de7fca30-1c9d-43ae-b0bf-9b75f09fe750-xtables-lock\") pod \"kube-proxy-89fqr\" (UID: \"de7fca30-1c9d-43ae-b0bf-9b75f09fe750\") " pod="kube-system/kube-proxy-89fqr"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.842782    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bfab650c-092b-4952-8c4f-66bb8eb60a69-tmp\") pod \"storage-provisioner\" (UID: \"bfab650c-092b-4952-8c4f-66bb8eb60a69\") " pod="kube-system/storage-provisioner"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.842832    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de7fca30-1c9d-43ae-b0bf-9b75f09fe750-lib-modules\") pod \"kube-proxy-89fqr\" (UID: \"de7fca30-1c9d-43ae-b0bf-9b75f09fe750\") " pod="kube-system/kube-proxy-89fqr"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: E1115 10:27:31.843563    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: E1115 10:27:31.843652    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0827c1bb-612d-4ee1-ba28-42d9f6a40af0-config-volume podName:0827c1bb-612d-4ee1-ba28-42d9f6a40af0 nodeName:}" failed. No retries permitted until 2025-11-15 10:27:32.343632161 +0000 UTC m=+4.658547841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0827c1bb-612d-4ee1-ba28-42d9f6a40af0-config-volume") pod "coredns-668d6bf9bc-pt7pj" (UID: "0827c1bb-612d-4ee1-ba28-42d9f6a40af0") : object "kube-system"/"coredns" not registered
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.934031    1167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-934107"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.934367    1167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-934107"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: I1115 10:27:31.934689    1167 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-934107"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: E1115 10:27:31.952487    1167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-934107\" already exists" pod="kube-system/etcd-test-preload-934107"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: E1115 10:27:31.954476    1167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-934107\" already exists" pod="kube-system/kube-apiserver-test-preload-934107"
	Nov 15 10:27:31 test-preload-934107 kubelet[1167]: E1115 10:27:31.960282    1167 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-934107\" already exists" pod="kube-system/kube-scheduler-test-preload-934107"
	Nov 15 10:27:32 test-preload-934107 kubelet[1167]: E1115 10:27:32.345563    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 10:27:32 test-preload-934107 kubelet[1167]: E1115 10:27:32.345901    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0827c1bb-612d-4ee1-ba28-42d9f6a40af0-config-volume podName:0827c1bb-612d-4ee1-ba28-42d9f6a40af0 nodeName:}" failed. No retries permitted until 2025-11-15 10:27:33.34587175 +0000 UTC m=+5.660787429 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0827c1bb-612d-4ee1-ba28-42d9f6a40af0-config-volume") pod "coredns-668d6bf9bc-pt7pj" (UID: "0827c1bb-612d-4ee1-ba28-42d9f6a40af0") : object "kube-system"/"coredns" not registered
	Nov 15 10:27:32 test-preload-934107 kubelet[1167]: E1115 10:27:32.854048    1167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-pt7pj" podUID="0827c1bb-612d-4ee1-ba28-42d9f6a40af0"
	Nov 15 10:27:32 test-preload-934107 kubelet[1167]: I1115 10:27:32.938441    1167 scope.go:117] "RemoveContainer" containerID="a286f1c5f4926bb132f3b7cee09b4737e3157423c02a0a8a3bd0e9785fd7f794"
	Nov 15 10:27:33 test-preload-934107 kubelet[1167]: E1115 10:27:33.352382    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 10:27:33 test-preload-934107 kubelet[1167]: E1115 10:27:33.352538    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0827c1bb-612d-4ee1-ba28-42d9f6a40af0-config-volume podName:0827c1bb-612d-4ee1-ba28-42d9f6a40af0 nodeName:}" failed. No retries permitted until 2025-11-15 10:27:35.352524558 +0000 UTC m=+7.667440229 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0827c1bb-612d-4ee1-ba28-42d9f6a40af0-config-volume") pod "coredns-668d6bf9bc-pt7pj" (UID: "0827c1bb-612d-4ee1-ba28-42d9f6a40af0") : object "kube-system"/"coredns" not registered
	Nov 15 10:27:33 test-preload-934107 kubelet[1167]: I1115 10:27:33.508498    1167 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 15 10:27:36 test-preload-934107 kubelet[1167]: I1115 10:27:36.987750    1167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:27:37 test-preload-934107 kubelet[1167]: E1115 10:27:37.864056    1167 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202457863653292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 15 10:27:37 test-preload-934107 kubelet[1167]: E1115 10:27:37.864097    1167 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202457863653292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 15 10:27:47 test-preload-934107 kubelet[1167]: E1115 10:27:47.866371    1167 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202467865995884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 15 10:27:47 test-preload-934107 kubelet[1167]: E1115 10:27:47.866397    1167 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202467865995884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6f564b117690ebc40bb2089f934c4f56f7be783475f8dcce0f098a5be357d7a8] <==
	I1115 10:27:33.072019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:27:33.094232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:27:33.094308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [a286f1c5f4926bb132f3b7cee09b4737e3157423c02a0a8a3bd0e9785fd7f794] <==
	I1115 10:27:32.356705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:27:32.360521       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-934107 -n test-preload-934107
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-934107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-934107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-934107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-934107: (1.059698905s)
--- FAIL: TestPreload (126.33s)
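To reproduce this failure locally and gather the same post-mortem data, the sketch below reuses the commands the harness ran above; it is only a sketch. The profile name test-preload-934107 is specific to this run (a fresh run generates a new suffix), and the first line assumes the repository's usual integration-test entry point (Makefile target plus TEST_ARGS), which is not shown in this report and should be verified against the repo before use:

	# Assumed re-run entry point; check the repo's Makefile and integration docs first
	env TEST_ARGS="-minikube-start-args=--driver=kvm2 --container-runtime=crio -test.run TestPreload" make integration

	# Post-mortem and cleanup, matching the helpers_test.go invocations above
	# (jsonpath/format arguments quoted here for interactive shell use)
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p test-preload-934107 -n test-preload-934107
	out/minikube-linux-amd64 -p test-preload-934107 logs -n 25
	kubectl --context test-preload-934107 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 delete -p test-preload-934107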

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-485426 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-485426 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.325873643s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-485426] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-485426" primary control-plane node in "pause-485426" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-485426" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:34:38.672052  446484 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:38.672178  446484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:38.672187  446484 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:38.672191  446484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:38.672396  446484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:34:38.672830  446484 out.go:368] Setting JSON to false
	I1115 10:34:38.673756  446484 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8226,"bootTime":1763194653,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:34:38.673815  446484 start.go:143] virtualization: kvm guest
	I1115 10:34:38.675576  446484 out.go:179] * [pause-485426] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:34:38.676891  446484 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:34:38.676885  446484 notify.go:221] Checking for updates...
	I1115 10:34:38.680400  446484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:34:38.681717  446484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:34:38.683130  446484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 10:34:38.684271  446484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:34:38.685357  446484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:34:38.686911  446484 config.go:182] Loaded profile config "pause-485426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:38.687532  446484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:34:38.728954  446484 out.go:179] * Using the kvm2 driver based on existing profile
	I1115 10:34:38.730040  446484 start.go:309] selected driver: kvm2
	I1115 10:34:38.730062  446484 start.go:930] validating driver "kvm2" against &{Name:pause-485426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-485426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:38.730283  446484 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:34:38.731793  446484 cni.go:84] Creating CNI manager for ""
	I1115 10:34:38.731869  446484 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:34:38.731963  446484 start.go:353] cluster config:
	{Name:pause-485426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-485426 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:38.732147  446484 iso.go:125] acquiring lock: {Name:mke3d0b50f750b07aabde39a6bc9fa707eafad32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:34:38.733544  446484 out.go:179] * Starting "pause-485426" primary control-plane node in "pause-485426" cluster
	I1115 10:34:38.734775  446484 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:38.734823  446484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:34:38.734853  446484 cache.go:65] Caching tarball of preloaded images
	I1115 10:34:38.734984  446484 preload.go:238] Found /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:34:38.735005  446484 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:34:38.735184  446484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/config.json ...
	I1115 10:34:38.735533  446484 start.go:360] acquireMachinesLock for pause-485426: {Name:mk50d09d451dfb6834d3dcf4331d8b4da7231bd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 10:34:38.735612  446484 start.go:364] duration metric: took 47.311µs to acquireMachinesLock for "pause-485426"
	I1115 10:34:38.735639  446484 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:34:38.735655  446484 fix.go:54] fixHost starting: 
	I1115 10:34:38.738068  446484 fix.go:112] recreateIfNeeded on pause-485426: state=Running err=<nil>
	W1115 10:34:38.738108  446484 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:34:38.739491  446484 out.go:252] * Updating the running kvm2 "pause-485426" VM ...
	I1115 10:34:38.739519  446484 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:38.742337  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:38.742820  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:38.742855  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:38.743040  446484 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:38.743252  446484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1115 10:34:38.743263  446484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:38.856833  446484 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-485426
	
	I1115 10:34:38.856885  446484 buildroot.go:166] provisioning hostname "pause-485426"
	I1115 10:34:38.862529  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:38.863159  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:38.863196  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:38.863468  446484 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:38.863785  446484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1115 10:34:38.863797  446484 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-485426 && echo "pause-485426" | sudo tee /etc/hostname
	I1115 10:34:38.991581  446484 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-485426
	
	I1115 10:34:38.995652  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:38.996398  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:38.996448  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:38.996739  446484 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:38.997056  446484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1115 10:34:38.997081  446484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-485426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-485426/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-485426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:34:39.111834  446484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:34:39.111913  446484 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 10:34:39.111978  446484 buildroot.go:174] setting up certificates
	I1115 10:34:39.111990  446484 provision.go:84] configureAuth start
	I1115 10:34:39.115412  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.116061  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:39.116101  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.119064  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.119536  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:39.119562  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.119774  446484 provision.go:143] copyHostCerts
	I1115 10:34:39.119835  446484 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem, removing ...
	I1115 10:34:39.119851  446484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem
	I1115 10:34:39.119919  446484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 10:34:39.120060  446484 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem, removing ...
	I1115 10:34:39.120074  446484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem
	I1115 10:34:39.120110  446484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 10:34:39.120172  446484 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem, removing ...
	I1115 10:34:39.120179  446484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem
	I1115 10:34:39.120202  446484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 10:34:39.120254  446484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.pause-485426 san=[127.0.0.1 192.168.39.9 localhost minikube pause-485426]
	I1115 10:34:39.211384  446484 provision.go:177] copyRemoteCerts
	I1115 10:34:39.211464  446484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:34:39.214866  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.215308  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:39.215334  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.215480  446484 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/pause-485426/id_rsa Username:docker}
	I1115 10:34:39.305480  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:34:39.340935  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 10:34:39.376657  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:34:39.416135  446484 provision.go:87] duration metric: took 304.123227ms to configureAuth
	I1115 10:34:39.416171  446484 buildroot.go:189] setting minikube options for container-runtime
	I1115 10:34:39.416473  446484 config.go:182] Loaded profile config "pause-485426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:39.420390  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.421011  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:39.421062  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:39.421269  446484 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:39.421514  446484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1115 10:34:39.421537  446484 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:34:45.033554  446484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:34:45.033587  446484 machine.go:97] duration metric: took 6.294058092s to provisionDockerMachine
	I1115 10:34:45.033605  446484 start.go:293] postStartSetup for "pause-485426" (driver="kvm2")
	I1115 10:34:45.033619  446484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:34:45.033727  446484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:34:45.037645  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.038241  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:45.038284  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.038536  446484 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/pause-485426/id_rsa Username:docker}
	I1115 10:34:45.126249  446484 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:34:45.131616  446484 info.go:137] Remote host: Buildroot 2025.02
	I1115 10:34:45.131673  446484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/addons for local assets ...
	I1115 10:34:45.131743  446484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/files for local assets ...
	I1115 10:34:45.131839  446484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem -> 4168012.pem in /etc/ssl/certs
	I1115 10:34:45.132114  446484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:34:45.144147  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem --> /etc/ssl/certs/4168012.pem (1708 bytes)
	I1115 10:34:45.176353  446484 start.go:296] duration metric: took 142.728571ms for postStartSetup
	I1115 10:34:45.176414  446484 fix.go:56] duration metric: took 6.44076266s for fixHost
	I1115 10:34:45.179557  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.180027  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:45.180054  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.180206  446484 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:45.180412  446484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1115 10:34:45.180427  446484 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 10:34:45.282476  446484 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763202885.279684353
	
	I1115 10:34:45.282509  446484 fix.go:216] guest clock: 1763202885.279684353
	I1115 10:34:45.282518  446484 fix.go:229] Guest: 2025-11-15 10:34:45.279684353 +0000 UTC Remote: 2025-11-15 10:34:45.176420665 +0000 UTC m=+6.568253819 (delta=103.263688ms)
	I1115 10:34:45.282537  446484 fix.go:200] guest clock delta is within tolerance: 103.263688ms
	I1115 10:34:45.282542  446484 start.go:83] releasing machines lock for "pause-485426", held for 6.54691437s
	I1115 10:34:45.285672  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.286159  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:45.286188  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.286827  446484 ssh_runner.go:195] Run: cat /version.json
	I1115 10:34:45.286889  446484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:34:45.290043  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.290310  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.290535  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:45.290570  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.290792  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:45.290827  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:45.290789  446484 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/pause-485426/id_rsa Username:docker}
	I1115 10:34:45.291248  446484 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/pause-485426/id_rsa Username:docker}
	I1115 10:34:45.392788  446484 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:45.399474  446484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:34:45.573493  446484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:34:45.584344  446484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:34:45.584436  446484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:34:45.598086  446484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:34:45.598117  446484 start.go:496] detecting cgroup driver to use...
	I1115 10:34:45.598201  446484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:34:45.628746  446484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:34:45.658422  446484 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:34:45.658692  446484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:34:45.690539  446484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:34:45.722631  446484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:34:46.131030  446484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:34:46.530328  446484 docker.go:234] disabling docker service ...
	I1115 10:34:46.530402  446484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:34:46.596642  446484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:34:46.652000  446484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:34:47.050929  446484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:34:47.443801  446484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:34:47.492332  446484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:34:47.534012  446484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:34:47.534098  446484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.553127  446484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:34:47.553194  446484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.574681  446484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.616778  446484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.653210  446484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:34:47.677168  446484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.734305  446484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.777767  446484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:47.812862  446484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:34:47.837768  446484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:34:47.869159  446484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:48.197877  446484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:34:58.480868  446484 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.282932649s)
	I1115 10:34:58.480921  446484 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:34:58.480982  446484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:34:58.486781  446484 start.go:564] Will wait 60s for crictl version
	I1115 10:34:58.486846  446484 ssh_runner.go:195] Run: which crictl
	I1115 10:34:58.490797  446484 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 10:34:58.527047  446484 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 10:34:58.527132  446484 ssh_runner.go:195] Run: crio --version
	I1115 10:34:58.555355  446484 ssh_runner.go:195] Run: crio --version
	I1115 10:34:58.586183  446484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1115 10:34:58.590388  446484 main.go:143] libmachine: domain pause-485426 has defined MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:58.590804  446484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:b2:31", ip: ""} in network mk-pause-485426: {Iface:virbr1 ExpiryTime:2025-11-15 11:34:01 +0000 UTC Type:0 Mac:52:54:00:46:b2:31 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:pause-485426 Clientid:01:52:54:00:46:b2:31}
	I1115 10:34:58.590825  446484 main.go:143] libmachine: domain pause-485426 has defined IP address 192.168.39.9 and MAC address 52:54:00:46:b2:31 in network mk-pause-485426
	I1115 10:34:58.590996  446484 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1115 10:34:58.595818  446484 kubeadm.go:884] updating cluster {Name:pause-485426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-485426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia
-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:34:58.595946  446484 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:58.595990  446484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:34:58.639402  446484 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:34:58.639431  446484 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:34:58.639496  446484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:34:58.681382  446484 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:34:58.681424  446484 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:34:58.681438  446484 kubeadm.go:935] updating node { 192.168.39.9 8443 v1.34.1 crio true true} ...
	I1115 10:34:58.681579  446484 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-485426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-485426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:34:58.681740  446484 ssh_runner.go:195] Run: crio config
	I1115 10:34:58.735065  446484 cni.go:84] Creating CNI manager for ""
	I1115 10:34:58.735092  446484 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:34:58.735113  446484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:34:58.735134  446484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-485426 NodeName:pause-485426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:34:58.735268  446484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-485426"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:34:58.735356  446484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:34:58.748471  446484 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:34:58.748672  446484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:34:58.764688  446484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1115 10:34:58.788207  446484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:34:58.811640  446484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1115 10:34:58.833154  446484 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I1115 10:34:58.837482  446484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:59.026163  446484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:34:59.046588  446484 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426 for IP: 192.168.39.9
	I1115 10:34:59.046620  446484 certs.go:195] generating shared ca certs ...
	I1115 10:34:59.046643  446484 certs.go:227] acquiring lock for ca certs: {Name:mk02a14faa29b024d0296173a778127e8da9e7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:59.046839  446484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key
	I1115 10:34:59.046882  446484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key
	I1115 10:34:59.046892  446484 certs.go:257] generating profile certs ...
	I1115 10:34:59.046977  446484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/client.key
	I1115 10:34:59.047034  446484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/apiserver.key.be64fc5b
	I1115 10:34:59.047069  446484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/proxy-client.key
	I1115 10:34:59.047206  446484 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/416801.pem (1338 bytes)
	W1115 10:34:59.047249  446484 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-412813/.minikube/certs/416801_empty.pem, impossibly tiny 0 bytes
	I1115 10:34:59.047261  446484 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:34:59.047284  446484 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:34:59.047313  446484 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:34:59.047344  446484 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem (1675 bytes)
	I1115 10:34:59.047391  446484 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem (1708 bytes)
	I1115 10:34:59.048401  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:34:59.080535  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:34:59.111884  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:34:59.147157  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:34:59.176692  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:34:59.213634  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:34:59.249881  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:34:59.287351  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:34:59.327190  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:34:59.369354  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/416801.pem --> /usr/share/ca-certificates/416801.pem (1338 bytes)
	I1115 10:34:59.409995  446484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem --> /usr/share/ca-certificates/4168012.pem (1708 bytes)
	I1115 10:34:59.441989  446484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:34:59.463534  446484 ssh_runner.go:195] Run: openssl version
	I1115 10:34:59.470585  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4168012.pem && ln -fs /usr/share/ca-certificates/4168012.pem /etc/ssl/certs/4168012.pem"
	I1115 10:34:59.485369  446484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168012.pem
	I1115 10:34:59.490972  446484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:45 /usr/share/ca-certificates/4168012.pem
	I1115 10:34:59.491041  446484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168012.pem
	I1115 10:34:59.498367  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4168012.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:34:59.510638  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:34:59.523611  446484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:34:59.528713  446484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:38 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:34:59.528781  446484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:34:59.535827  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:34:59.547050  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/416801.pem && ln -fs /usr/share/ca-certificates/416801.pem /etc/ssl/certs/416801.pem"
	I1115 10:34:59.560302  446484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/416801.pem
	I1115 10:34:59.565604  446484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:45 /usr/share/ca-certificates/416801.pem
	I1115 10:34:59.565695  446484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/416801.pem
	I1115 10:34:59.572800  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/416801.pem /etc/ssl/certs/51391683.0"
	I1115 10:34:59.584457  446484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:34:59.590449  446484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:34:59.598387  446484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:34:59.605819  446484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:34:59.613483  446484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:34:59.620979  446484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:34:59.628682  446484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:34:59.636004  446484 kubeadm.go:401] StartCluster: {Name:pause-485426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-485426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:59.636139  446484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:59.636195  446484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:59.677051  446484 cri.go:89] found id: "072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614"
	I1115 10:34:59.677075  446484 cri.go:89] found id: "e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586"
	I1115 10:34:59.677080  446484 cri.go:89] found id: "e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa"
	I1115 10:34:59.677083  446484 cri.go:89] found id: "ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9"
	I1115 10:34:59.677087  446484 cri.go:89] found id: "a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a"
	I1115 10:34:59.677090  446484 cri.go:89] found id: "28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2"
	I1115 10:34:59.677093  446484 cri.go:89] found id: "9fa3449da6499c190f3bc7f5425768c04a7bc21b6bbd2991219f333c6c0533ea"
	I1115 10:34:59.677098  446484 cri.go:89] found id: "0b1044a0a7fae5a3008e3885fb38260cdbeb1cc8e803127bce53f9950e40025f"
	I1115 10:34:59.677102  446484 cri.go:89] found id: "6f92f344c1f1cdb2bb0636f03cf3fa78b31a73425c54ee567be263ab42a7870e"
	I1115 10:34:59.677109  446484 cri.go:89] found id: "f8bae831e22b36eb4f18705d4fff86c6be5d46e2d90011b3dd9798f603e57199"
	I1115 10:34:59.677111  446484 cri.go:89] found id: "8fcb3a40a5204c36154d90082fa00b76d650f650758f32b692b81131ecfd8236"
	I1115 10:34:59.677114  446484 cri.go:89] found id: "5c1fae3041f4d954f589a801d18a805b286dfe67bc9684227662e8510015da2a"
	I1115 10:34:59.677116  446484 cri.go:89] found id: ""
	I1115 10:34:59.677163  446484 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
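For context on the failure above: pause_test.go:100 fails because the quoted phrase "The running cluster does not require reconfiguration" never appears in the output of the second start, even though the pause-485426 VM and cluster were already running. Below is a minimal, hypothetical sketch of that check as a standalone program; it is not the actual test code, and the binary path, profile name, and flags are simply copied from the log above.

// Hypothetical standalone reproduction of the failed check, not the actual
// pause_test.go code. Binary path, profile name, and flags are taken from the
// log above; run from the repository root where out/minikube-linux-amd64 exists.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-485426",
		"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
	// Capture stdout and stderr together, since both appear in the failure output above.
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "second start failed: %v\n", err)
		os.Exit(1)
	}
	// The condition asserted by TestPause/serial/SecondStartNoReconfiguration:
	// an already-running cluster should be reused without being reconfigured.
	if !strings.Contains(string(out), "The running cluster does not require reconfiguration") {
		fmt.Println("FAIL: second start did not report that the running cluster was left unchanged")
		os.Exit(1)
	}
	fmt.Println("PASS: cluster was reused without reconfiguration")
}

In the log above the second start instead re-provisions the machine and restarts CRI-O (the "sudo systemctl restart crio" step alone takes over 10s), which is consistent with the expected message never being printed.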
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-485426 -n pause-485426
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-485426 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-485426 logs -n 25: (1.574920764s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ ssh     │ cert-options-636664 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                 │ cert-options-636664       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ ssh     │ -p cert-options-636664 -- sudo cat /etc/kubernetes/admin.conf                                                                                               │ cert-options-636664       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ delete  │ -p cert-options-636664                                                                                                                                      │ cert-options-636664       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ ssh     │ -p NoKubernetes-170129 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │                     │
	│ start   │ -p stopped-upgrade-814289 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-814289    │ jenkins │ v1.32.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ stop    │ -p NoKubernetes-170129                                                                                                                                      │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p NoKubernetes-170129 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ stop    │ -p kubernetes-upgrade-546745                                                                                                                                │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p NoKubernetes-170129 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p NoKubernetes-170129                                                                                                                                      │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p pause-485426 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-485426              │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ stop    │ stopped-upgrade-814289 stop                                                                                                                                 │ stopped-upgrade-814289    │ jenkins │ v1.32.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p stopped-upgrade-814289 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-814289    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p pause-485426 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-485426              │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ delete  │ -p kubernetes-upgrade-546745                                                                                                                                │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p guest-763099 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-763099              │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-814289 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-814289    │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ delete  │ -p stopped-upgrade-814289                                                                                                                                   │ stopped-upgrade-814289    │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p auto-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-765007               │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ start   │ -p cert-expiration-506364 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-506364    │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ start   │ -p kindnet-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-765007            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:04
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:04.867476  447023 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:04.867627  447023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:04.867637  447023 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:04.867643  447023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:04.867965  447023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:35:04.868581  447023 out.go:368] Setting JSON to false
	I1115 10:35:04.869929  447023 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8252,"bootTime":1763194653,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:35:04.870072  447023 start.go:143] virtualization: kvm guest
	I1115 10:35:04.872349  447023 out.go:179] * [kindnet-765007] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:35:04.874375  447023 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:35:04.874374  447023 notify.go:221] Checking for updates...
	I1115 10:35:04.875931  447023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:04.877373  447023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:35:04.878899  447023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 10:35:04.880271  447023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:35:04.881799  447023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:04.883702  447023 config.go:182] Loaded profile config "auto-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.883875  447023 config.go:182] Loaded profile config "cert-expiration-506364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.883978  447023 config.go:182] Loaded profile config "guest-763099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1115 10:35:04.884111  447023 config.go:182] Loaded profile config "pause-485426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.884217  447023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:04.928555  447023 out.go:179] * Using the kvm2 driver based on user configuration
	I1115 10:35:04.930026  447023 start.go:309] selected driver: kvm2
	I1115 10:35:04.930051  447023 start.go:930] validating driver "kvm2" against <nil>
	I1115 10:35:04.930070  447023 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:04.931341  447023 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:35:04.931685  447023 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:04.931724  447023 cni.go:84] Creating CNI manager for "kindnet"
	I1115 10:35:04.931733  447023 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:35:04.931791  447023 start.go:353] cluster config:
	{Name:kindnet-765007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-765007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:04.931919  447023 iso.go:125] acquiring lock: {Name:mke3d0b50f750b07aabde39a6bc9fa707eafad32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:04.934552  447023 out.go:179] * Starting "kindnet-765007" primary control-plane node in "kindnet-765007" cluster
	I1115 10:35:04.154838  446484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:04.213621  446484 api_server.go:72] duration metric: took 1.061580731s to wait for apiserver process to appear ...
	I1115 10:35:04.213655  446484 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:04.213703  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:04.214344  446484 api_server.go:269] stopped: https://192.168.39.9:8443/healthz: Get "https://192.168.39.9:8443/healthz": dial tcp 192.168.39.9:8443: connect: connection refused
	I1115 10:35:04.713849  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:07.363084  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:35:07.363118  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:35:07.363138  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:07.423347  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:35:07.423385  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:35:07.713756  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:07.722154  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:07.722187  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:35:08.214865  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:08.220007  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:08.220042  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:35:08.714012  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:08.729175  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1115 10:35:08.744633  446484 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:08.744690  446484 api_server.go:131] duration metric: took 4.531003412s to wait for apiserver health ...
	I1115 10:35:08.744705  446484 cni.go:84] Creating CNI manager for ""
	I1115 10:35:08.744714  446484 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:35:08.746504  446484 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 10:35:08.748655  446484 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 10:35:08.774655  446484 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 10:35:08.813763  446484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:08.829692  446484 system_pods.go:59] 6 kube-system pods found
	I1115 10:35:08.829739  446484 system_pods.go:61] "coredns-66bc5c9577-5zzjr" [1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:08.829753  446484 system_pods.go:61] "etcd-pause-485426" [8b5c081b-8732-4d7f-87c4-59c24d96de14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:08.829767  446484 system_pods.go:61] "kube-apiserver-pause-485426" [d005f276-2ce2-4c2c-9285-40b8fc1047bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:08.829778  446484 system_pods.go:61] "kube-controller-manager-pause-485426" [a4db21f8-7cae-40bc-a464-c7bfd1fa7610] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:08.829786  446484 system_pods.go:61] "kube-proxy-54x7t" [580dd749-55c2-4ae3-91db-623ae52c0bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:35:08.829794  446484 system_pods.go:61] "kube-scheduler-pause-485426" [d9718ae3-cc0e-443c-b3af-6c40bffa84bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:08.829807  446484 system_pods.go:74] duration metric: took 16.014485ms to wait for pod list to return data ...
	I1115 10:35:08.829819  446484 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:08.840207  446484 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 10:35:08.840251  446484 node_conditions.go:123] node cpu capacity is 2
	I1115 10:35:08.840274  446484 node_conditions.go:105] duration metric: took 10.444265ms to run NodePressure ...
	I1115 10:35:08.840342  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:35:09.166726  446484 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1115 10:35:09.170395  446484 kubeadm.go:744] kubelet initialised
	I1115 10:35:09.170428  446484 kubeadm.go:745] duration metric: took 3.67217ms waiting for restarted kubelet to initialise ...
	I1115 10:35:09.170452  446484 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:09.186134  446484 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:09.186167  446484 kubeadm.go:602] duration metric: took 9.43934177s to restartPrimaryControlPlane
	I1115 10:35:09.186182  446484 kubeadm.go:403] duration metric: took 9.550188034s to StartCluster
	I1115 10:35:09.186206  446484 settings.go:142] acquiring lock: {Name:mk51bbf0fd9b357d299ebd118e728450a954032c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:09.186308  446484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:35:09.187231  446484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/kubeconfig: {Name:mk18351328d03342e92a234b66dd855b67ad51ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:09.187530  446484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:09.187605  446484 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:09.187858  446484 config.go:182] Loaded profile config "pause-485426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:09.189363  446484 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:09.190256  446484 out.go:179] * Enabled addons: 
	I1115 10:35:04.947872  446784 main.go:143] libmachine: waiting for domain to start...
	I1115 10:35:04.949579  446784 main.go:143] libmachine: domain is now running
	I1115 10:35:04.949601  446784 main.go:143] libmachine: waiting for IP...
	I1115 10:35:04.950567  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:04.951404  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:04.951425  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:04.951857  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:04.951928  446784 retry.go:31] will retry after 204.12322ms: waiting for domain to come up
	I1115 10:35:05.157363  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:05.158259  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:05.158283  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:05.158705  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:05.158752  446784 retry.go:31] will retry after 247.632117ms: waiting for domain to come up
	I1115 10:35:05.408324  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:05.409091  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:05.409115  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:05.409541  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:05.409601  446784 retry.go:31] will retry after 440.981833ms: waiting for domain to come up
	I1115 10:35:05.852696  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:05.853506  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:05.853522  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:05.854046  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:05.854090  446784 retry.go:31] will retry after 382.523756ms: waiting for domain to come up
	I1115 10:35:06.238948  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:06.239737  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:06.239767  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:06.240105  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:06.240145  446784 retry.go:31] will retry after 576.427015ms: waiting for domain to come up
	I1115 10:35:06.818027  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:06.818813  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:06.818836  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:06.819242  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:06.819289  446784 retry.go:31] will retry after 861.71118ms: waiting for domain to come up
	I1115 10:35:07.682480  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:07.683399  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:07.683428  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:07.683832  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:07.683877  446784 retry.go:31] will retry after 1.063502672s: waiting for domain to come up
	I1115 10:35:08.749045  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:08.749717  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:08.749734  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:08.750056  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:08.750094  446784 retry.go:31] will retry after 1.248064704s: waiting for domain to come up
	I1115 10:35:04.935804  447023 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:04.935848  447023 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:35:04.935869  447023 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:04.935989  447023 preload.go:238] Found /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:35:04.936005  447023 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:04.936143  447023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/config.json ...
	I1115 10:35:04.936169  447023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/config.json: {Name:mk80c7c7043866a72e212241a0b10c76cd171e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:04.936353  447023 start.go:360] acquireMachinesLock for kindnet-765007: {Name:mk50d09d451dfb6834d3dcf4331d8b4da7231bd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 10:35:09.191143  446484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:09.191893  446484 addons.go:515] duration metric: took 4.292717ms for enable addons: enabled=[]
	I1115 10:35:09.402032  446484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:09.430567  446484 node_ready.go:35] waiting up to 6m0s for node "pause-485426" to be "Ready" ...
	I1115 10:35:09.433878  446484 node_ready.go:49] node "pause-485426" is "Ready"
	I1115 10:35:09.433930  446484 node_ready.go:38] duration metric: took 3.306323ms for node "pause-485426" to be "Ready" ...
	I1115 10:35:09.433953  446484 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:09.434022  446484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:09.455914  446484 api_server.go:72] duration metric: took 268.341896ms to wait for apiserver process to appear ...
	I1115 10:35:09.455948  446484 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:09.455976  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:09.462967  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1115 10:35:09.463965  446484 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:09.463991  446484 api_server.go:131] duration metric: took 8.033619ms to wait for apiserver health ...
	I1115 10:35:09.464002  446484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:09.467408  446484 system_pods.go:59] 6 kube-system pods found
	I1115 10:35:09.467454  446484 system_pods.go:61] "coredns-66bc5c9577-5zzjr" [1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:09.467467  446484 system_pods.go:61] "etcd-pause-485426" [8b5c081b-8732-4d7f-87c4-59c24d96de14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:09.467479  446484 system_pods.go:61] "kube-apiserver-pause-485426" [d005f276-2ce2-4c2c-9285-40b8fc1047bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:09.467492  446484 system_pods.go:61] "kube-controller-manager-pause-485426" [a4db21f8-7cae-40bc-a464-c7bfd1fa7610] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:09.467508  446484 system_pods.go:61] "kube-proxy-54x7t" [580dd749-55c2-4ae3-91db-623ae52c0bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:35:09.467547  446484 system_pods.go:61] "kube-scheduler-pause-485426" [d9718ae3-cc0e-443c-b3af-6c40bffa84bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:09.467564  446484 system_pods.go:74] duration metric: took 3.553322ms to wait for pod list to return data ...
	I1115 10:35:09.467579  446484 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:09.470190  446484 default_sa.go:45] found service account: "default"
	I1115 10:35:09.470217  446484 default_sa.go:55] duration metric: took 2.626387ms for default service account to be created ...
	I1115 10:35:09.470228  446484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:09.473196  446484 system_pods.go:86] 6 kube-system pods found
	I1115 10:35:09.473237  446484 system_pods.go:89] "coredns-66bc5c9577-5zzjr" [1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:09.473250  446484 system_pods.go:89] "etcd-pause-485426" [8b5c081b-8732-4d7f-87c4-59c24d96de14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:09.473260  446484 system_pods.go:89] "kube-apiserver-pause-485426" [d005f276-2ce2-4c2c-9285-40b8fc1047bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:09.473269  446484 system_pods.go:89] "kube-controller-manager-pause-485426" [a4db21f8-7cae-40bc-a464-c7bfd1fa7610] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:09.473278  446484 system_pods.go:89] "kube-proxy-54x7t" [580dd749-55c2-4ae3-91db-623ae52c0bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:35:09.473286  446484 system_pods.go:89] "kube-scheduler-pause-485426" [d9718ae3-cc0e-443c-b3af-6c40bffa84bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:09.473305  446484 system_pods.go:126] duration metric: took 3.068275ms to wait for k8s-apps to be running ...
	I1115 10:35:09.473318  446484 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:09.473386  446484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:09.502561  446484 system_svc.go:56] duration metric: took 29.228306ms WaitForService to wait for kubelet
	I1115 10:35:09.502597  446484 kubeadm.go:587] duration metric: took 315.034762ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:09.502625  446484 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:09.505333  446484 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 10:35:09.505371  446484 node_conditions.go:123] node cpu capacity is 2
	I1115 10:35:09.505391  446484 node_conditions.go:105] duration metric: took 2.759026ms to run NodePressure ...
	I1115 10:35:09.505411  446484 start.go:242] waiting for startup goroutines ...
	I1115 10:35:09.505426  446484 start.go:247] waiting for cluster config update ...
	I1115 10:35:09.505441  446484 start.go:256] writing updated cluster config ...
	I1115 10:35:09.505890  446484 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:09.513131  446484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:09.514071  446484 kapi.go:59] client config for pause-485426: &rest.Config{Host:"https://192.168.39.9:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/client.key", CAFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:35:09.516892  446484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5zzjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:11.025108  446484 pod_ready.go:94] pod "coredns-66bc5c9577-5zzjr" is "Ready"
	I1115 10:35:11.025156  446484 pod_ready.go:86] duration metric: took 1.508237033s for pod "coredns-66bc5c9577-5zzjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:11.028462  446484 pod_ready.go:83] waiting for pod "etcd-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:13.036070  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	I1115 10:35:09.999939  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:10.000871  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:10.000901  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:10.001396  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:10.001453  446784 retry.go:31] will retry after 1.398285842s: waiting for domain to come up
	I1115 10:35:11.402122  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:11.402819  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:11.402837  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:11.403174  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:11.403208  446784 retry.go:31] will retry after 1.520876771s: waiting for domain to come up
	I1115 10:35:12.926064  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:12.926871  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:12.926903  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:12.927407  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:12.927460  446784 retry.go:31] will retry after 2.096829655s: waiting for domain to come up
	W1115 10:35:15.534875  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	W1115 10:35:17.535312  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	I1115 10:35:15.026834  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:15.027831  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:15.027875  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:15.028425  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:15.028478  446784 retry.go:31] will retry after 2.635595032s: waiting for domain to come up
	I1115 10:35:17.665915  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:17.666480  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:17.666497  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:17.666828  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:17.666870  446784 retry.go:31] will retry after 2.822178743s: waiting for domain to come up
	I1115 10:35:21.993533  446878 start.go:364] duration metric: took 22.600643222s to acquireMachinesLock for "cert-expiration-506364"
	I1115 10:35:21.993596  446878 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:35:21.993603  446878 fix.go:54] fixHost starting: 
	I1115 10:35:21.996096  446878 fix.go:112] recreateIfNeeded on cert-expiration-506364: state=Running err=<nil>
	W1115 10:35:21.996119  446878 fix.go:138] unexpected machine state, will restart: <nil>
	W1115 10:35:20.034391  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	W1115 10:35:22.035234  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	I1115 10:35:20.491026  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.491723  446784 main.go:143] libmachine: domain auto-765007 has current primary IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.491745  446784 main.go:143] libmachine: found domain IP: 192.168.61.247
	I1115 10:35:20.491757  446784 main.go:143] libmachine: reserving static IP address...
	I1115 10:35:20.492104  446784 main.go:143] libmachine: unable to find host DHCP lease matching {name: "auto-765007", mac: "52:54:00:aa:35:52", ip: "192.168.61.247"} in network mk-auto-765007
	I1115 10:35:20.718216  446784 main.go:143] libmachine: reserved static IP address 192.168.61.247 for domain auto-765007
	I1115 10:35:20.718246  446784 main.go:143] libmachine: waiting for SSH...
	I1115 10:35:20.718255  446784 main.go:143] libmachine: Getting to WaitForSSH function...
	I1115 10:35:20.721163  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.721621  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:20.721673  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.721919  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:20.722233  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:20.722250  446784 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1115 10:35:20.831369  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:20.831841  446784 main.go:143] libmachine: domain creation complete
	I1115 10:35:20.833677  446784 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:20.836369  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.836887  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:20.836928  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.837233  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:20.837460  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:20.837470  446784 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:20.949322  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1115 10:35:20.949359  446784 buildroot.go:166] provisioning hostname "auto-765007"
	I1115 10:35:20.952459  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.952977  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:20.953017  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.953291  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:20.953499  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:20.953511  446784 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-765007 && echo "auto-765007" | sudo tee /etc/hostname
	I1115 10:35:21.083155  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-765007
	
	I1115 10:35:21.086811  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.087318  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.087351  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.087614  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:21.087898  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:21.087922  446784 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-765007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-765007/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-765007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:21.209018  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:21.209058  446784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 10:35:21.209078  446784 buildroot.go:174] setting up certificates
	I1115 10:35:21.209089  446784 provision.go:84] configureAuth start
	I1115 10:35:21.212056  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.212488  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.212514  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.214682  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.215126  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.215150  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.215292  446784 provision.go:143] copyHostCerts
	I1115 10:35:21.215363  446784 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem, removing ...
	I1115 10:35:21.215381  446784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem
	I1115 10:35:21.215445  446784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 10:35:21.215559  446784 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem, removing ...
	I1115 10:35:21.215572  446784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem
	I1115 10:35:21.215616  446784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 10:35:21.215731  446784 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem, removing ...
	I1115 10:35:21.215743  446784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem
	I1115 10:35:21.215769  446784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 10:35:21.215840  446784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.auto-765007 san=[127.0.0.1 192.168.61.247 auto-765007 localhost minikube]
	I1115 10:35:21.297171  446784 provision.go:177] copyRemoteCerts
	I1115 10:35:21.297249  446784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:21.299913  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.300426  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.300473  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.300722  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:21.388651  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:21.421045  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 10:35:21.452284  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:21.484782  446784 provision.go:87] duration metric: took 275.673851ms to configureAuth
	I1115 10:35:21.484826  446784 buildroot.go:189] setting minikube options for container-runtime
	I1115 10:35:21.485079  446784 config.go:182] Loaded profile config "auto-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:21.488464  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.488941  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.488979  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.489195  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:21.489409  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:21.489431  446784 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:21.733790  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:21.733825  446784 machine.go:97] duration metric: took 900.127474ms to provisionDockerMachine
	I1115 10:35:21.733841  446784 client.go:176] duration metric: took 18.840528393s to LocalClient.Create
	I1115 10:35:21.733864  446784 start.go:167] duration metric: took 18.840601444s to libmachine.API.Create "auto-765007"
	I1115 10:35:21.733885  446784 start.go:293] postStartSetup for "auto-765007" (driver="kvm2")
	I1115 10:35:21.733911  446784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:21.734006  446784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:21.736865  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.737305  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.737332  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.737479  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:21.824501  446784 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:21.829432  446784 info.go:137] Remote host: Buildroot 2025.02
	I1115 10:35:21.829464  446784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/addons for local assets ...
	I1115 10:35:21.829549  446784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/files for local assets ...
	I1115 10:35:21.829644  446784 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem -> 4168012.pem in /etc/ssl/certs
	I1115 10:35:21.829786  446784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:21.843508  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem --> /etc/ssl/certs/4168012.pem (1708 bytes)
	I1115 10:35:21.874047  446784 start.go:296] duration metric: took 140.141188ms for postStartSetup
	I1115 10:35:21.877614  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.878065  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.878094  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.878302  446784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/config.json ...
	I1115 10:35:21.878488  446784 start.go:128] duration metric: took 18.987728003s to createHost
	I1115 10:35:21.880739  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.881158  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.881180  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.881329  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:21.881506  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:21.881514  446784 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 10:35:21.993338  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763202921.955406441
	
	I1115 10:35:21.993365  446784 fix.go:216] guest clock: 1763202921.955406441
	I1115 10:35:21.993375  446784 fix.go:229] Guest: 2025-11-15 10:35:21.955406441 +0000 UTC Remote: 2025-11-15 10:35:21.878499848 +0000 UTC m=+32.122616878 (delta=76.906593ms)
	I1115 10:35:21.993400  446784 fix.go:200] guest clock delta is within tolerance: 76.906593ms
	I1115 10:35:21.993407  446784 start.go:83] releasing machines lock for "auto-765007", held for 19.10288731s
	I1115 10:35:21.997038  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.997486  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.997516  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.998088  446784 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:21.998151  446784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:22.002101  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002111  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002636  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:22.002675  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:22.002682  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002708  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002902  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:22.003186  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:22.089064  446784 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:22.115266  446784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:22.283472  446784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:22.291315  446784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:22.291395  446784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:22.312644  446784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:35:22.312687  446784 start.go:496] detecting cgroup driver to use...
	I1115 10:35:22.312758  446784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:22.334009  446784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:22.352971  446784 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:22.353037  446784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:22.372004  446784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:22.392235  446784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:22.559843  446784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:22.768836  446784 docker.go:234] disabling docker service ...
	I1115 10:35:22.768905  446784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:22.789236  446784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:22.805697  446784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:22.990252  446784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:23.163022  446784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:23.178985  446784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:23.202017  446784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:23.202078  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.215395  446784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:23.215488  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.231191  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.243531  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.255861  446784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:23.268575  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.280799  446784 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.300974  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.313744  446784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:23.324132  446784 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 10:35:23.324195  446784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 10:35:23.344585  446784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:23.356233  446784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:23.494556  446784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:23.600869  446784 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:23.600987  446784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:23.606080  446784 start.go:564] Will wait 60s for crictl version
	I1115 10:35:23.606146  446784 ssh_runner.go:195] Run: which crictl
	I1115 10:35:23.610129  446784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 10:35:23.649185  446784 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 10:35:23.649277  446784 ssh_runner.go:195] Run: crio --version
	I1115 10:35:23.681373  446784 ssh_runner.go:195] Run: crio --version
	I1115 10:35:23.715049  446784 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1115 10:35:21.998417  446878 out.go:252] * Updating the running kvm2 "cert-expiration-506364" VM ...
	I1115 10:35:21.998440  446878 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:22.002293  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.002771  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.002815  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.003386  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.003649  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.003656  446878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:22.119758  446878 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-506364
	
	I1115 10:35:22.119797  446878 buildroot.go:166] provisioning hostname "cert-expiration-506364"
	I1115 10:35:22.122999  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.123487  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.123520  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.123728  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.124026  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.124038  446878 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-506364 && echo "cert-expiration-506364" | sudo tee /etc/hostname
	I1115 10:35:22.260001  446878 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-506364
	
	I1115 10:35:22.263187  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.263587  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.263606  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.263775  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.263992  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.264001  446878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-506364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-506364/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-506364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:22.386050  446878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:22.386069  446878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 10:35:22.386087  446878 buildroot.go:174] setting up certificates
	I1115 10:35:22.386095  446878 provision.go:84] configureAuth start
	I1115 10:35:22.389773  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.390236  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.390256  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.393522  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.393971  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.393990  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.394218  446878 provision.go:143] copyHostCerts
	I1115 10:35:22.394302  446878 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem, removing ...
	I1115 10:35:22.394319  446878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem
	I1115 10:35:22.394393  446878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 10:35:22.394568  446878 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem, removing ...
	I1115 10:35:22.394577  446878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem
	I1115 10:35:22.394636  446878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 10:35:22.394846  446878 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem, removing ...
	I1115 10:35:22.394854  446878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem
	I1115 10:35:22.394892  446878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 10:35:22.394985  446878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-506364 san=[127.0.0.1 192.168.50.33 cert-expiration-506364 localhost minikube]
	I1115 10:35:22.762541  446878 provision.go:177] copyRemoteCerts
	I1115 10:35:22.762604  446878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:22.765379  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.765776  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.765799  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.765960  446878 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/cert-expiration-506364/id_rsa Username:docker}
	I1115 10:35:22.864091  446878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:22.897868  446878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:35:22.935039  446878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:35:22.972305  446878 provision.go:87] duration metric: took 586.191143ms to configureAuth
	I1115 10:35:22.972332  446878 buildroot.go:189] setting minikube options for container-runtime
	I1115 10:35:22.972546  446878 config.go:182] Loaded profile config "cert-expiration-506364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:22.976130  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.976635  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.976656  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.976860  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.977088  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.977097  446878 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:23.719297  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:23.719740  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:23.719770  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:23.720038  446784 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:23.724993  446784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:23.739932  446784 kubeadm.go:884] updating cluster {Name:auto-765007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:auto-765007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:23.740089  446784 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:23.740154  446784 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:23.774309  446784 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 10:35:23.774384  446784 ssh_runner.go:195] Run: which lz4
	I1115 10:35:23.778780  446784 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1115 10:35:23.783398  446784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1115 10:35:23.783439  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1115 10:35:24.035621  446484 pod_ready.go:94] pod "etcd-pause-485426" is "Ready"
	I1115 10:35:24.035684  446484 pod_ready.go:86] duration metric: took 13.007193766s for pod "etcd-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.039427  446484 pod_ready.go:83] waiting for pod "kube-apiserver-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.046743  446484 pod_ready.go:94] pod "kube-apiserver-pause-485426" is "Ready"
	I1115 10:35:24.046784  446484 pod_ready.go:86] duration metric: took 7.324303ms for pod "kube-apiserver-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.054471  446484 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.064018  446484 pod_ready.go:94] pod "kube-controller-manager-pause-485426" is "Ready"
	I1115 10:35:24.064057  446484 pod_ready.go:86] duration metric: took 9.5451ms for pod "kube-controller-manager-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.068631  446484 pod_ready.go:83] waiting for pod "kube-proxy-54x7t" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.233075  446484 pod_ready.go:94] pod "kube-proxy-54x7t" is "Ready"
	I1115 10:35:24.233118  446484 pod_ready.go:86] duration metric: took 164.429491ms for pod "kube-proxy-54x7t" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.433253  446484 pod_ready.go:83] waiting for pod "kube-scheduler-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.834003  446484 pod_ready.go:94] pod "kube-scheduler-pause-485426" is "Ready"
	I1115 10:35:24.834037  446484 pod_ready.go:86] duration metric: took 400.747769ms for pod "kube-scheduler-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.834054  446484 pod_ready.go:40] duration metric: took 15.320890226s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:24.901924  446484 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:24.904339  446484 out.go:179] * Done! kubectl is now configured to use "pause-485426" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.680758715Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5zzjr,Uid:1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763202908530755421,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-15T10:35:08.050675275Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&PodSandboxMetadata{Name:kube-proxy-54x7t,Uid:580dd749-55c2-4ae3-91db-623ae52c0bb4,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1763202908397335487,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-15T10:35:08.050685443Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-485426,Uid:1cd57b179067198663917e132bf01ec1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763202903768246425,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 1cd57b179067198663917e132bf01ec1,kubernetes.io/config.seen: 2025-11-15T10:35:03.064434031Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-485426,Uid:8f5b1e7356be0998212573bd481c46e9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763202903764745155,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.9:8443,kubernetes.io/config.hash: 8f5b1e7356be0998212573bd481c46e9,kubernetes.io/config.seen: 2025-11-15T10:35:03.064432652Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-485426,Uid:10bf336465658494145b970b817f83aa,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763202903734059272,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10bf336465658494145b970b817f83aa,kubernetes.io/config.seen: 2025-11-15T10:35:03.064434986Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&PodSandboxMetadata{Name:etcd-pause-485426,Uid:84dae304c504fc5effd23cf9ebb9daa7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763202903728843215,Labels:map[string]string{component: etcd,io.kubernetes.containe
r.name: POD,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.9:2379,kubernetes.io/config.hash: 84dae304c504fc5effd23cf9ebb9daa7,kubernetes.io/config.seen: 2025-11-15T10:35:03.064427587Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ce42b78902496eec100d1819eea1cefddecb9c023bb073cae1cce3e0ba4055d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5zzjr,Uid:1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1763202885949511438,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/con
fig.seen: 2025-11-15T10:34:27.865448984Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&PodSandboxMetadata{Name:kube-proxy-54x7t,Uid:580dd749-55c2-4ae3-91db-623ae52c0bb4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1763202885635983401,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-15T10:34:27.380183613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-485426,Uid:10bf336465658494145b970b817f83aa,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,Created
At:1763202885632981072,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10bf336465658494145b970b817f83aa,kubernetes.io/config.seen: 2025-11-15T10:34:22.197488070Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-485426,Uid:1cd57b179067198663917e132bf01ec1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1763202885629487720,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,tier: control-pl
ane,},Annotations:map[string]string{kubernetes.io/config.hash: 1cd57b179067198663917e132bf01ec1,kubernetes.io/config.seen: 2025-11-15T10:34:22.197487148Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&PodSandboxMetadata{Name:etcd-pause-485426,Uid:84dae304c504fc5effd23cf9ebb9daa7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1763202885600874516,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.9:2379,kubernetes.io/config.hash: 84dae304c504fc5effd23cf9ebb9daa7,kubernetes.io/config.seen: 2025-11-15T10:34:22.197489021Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44371065afe41fb0028c2817da45149
617d059cef9da751aa3aab83fb7d95b4e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-485426,Uid:8f5b1e7356be0998212573bd481c46e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1763202885582890136,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.9:8443,kubernetes.io/config.hash: 8f5b1e7356be0998212573bd481c46e9,kubernetes.io/config.seen: 2025-11-15T10:34:22.197483198Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3cd6943afb30c3de67bc0a0b941d98f3c9199e911b32051b3bad28615e4a6ab9,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-8wlgm,Uid:705e1861-e8c7-4176-8116-1e99c1819434,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1763202868157157
156,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-8wlgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 705e1861-e8c7-4176-8116-1e99c1819434,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-15T10:34:27.832280135Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=079158a7-e160-48fb-90f9-c36e534fe5fc name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.682900169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65a3d0a1-5fd7-4b1b-84da-9ed00eb473b4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.683295431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65a3d0a1-5fd7-4b1b-84da-9ed00eb473b4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.683663999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65a3d0a1-5fd7-4b1b-84da-9ed00eb473b4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.701560565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4001d72-2ba6-40a9-b52c-a409fabf3613 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.701694791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4001d72-2ba6-40a9-b52c-a409fabf3613 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.704433705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2364d5f5-c498-45e3-a507-577edfde521c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.705144057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202925705105431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2364d5f5-c498-45e3-a507-577edfde521c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.706478385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5496032-0fad-49af-82a8-a9849a84979e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.706597032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5496032-0fad-49af-82a8-a9849a84979e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.707323251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5496032-0fad-49af-82a8-a9849a84979e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.765485691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1da2892-1566-4dc2-b93b-785afe59add9 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.765614167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1da2892-1566-4dc2-b93b-785afe59add9 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.767633852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36d22d93-0702-49db-ae81-228142450793 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.768655300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202925768617905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36d22d93-0702-49db-ae81-228142450793 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.769451819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e4061a9-bb8a-4b5d-a954-fa91ebecd76e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.769577692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e4061a9-bb8a-4b5d-a954-fa91ebecd76e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.769930146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e4061a9-bb8a-4b5d-a954-fa91ebecd76e name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.832424446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51875aca-f682-4849-8b2f-e977755f1ea8 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.832536630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51875aca-f682-4849-8b2f-e977755f1ea8 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.835522615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f11a0eb2-a49b-4ef4-8494-716be731f626 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.836034139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202925836004501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f11a0eb2-a49b-4ef4-8494-716be731f626 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.836732539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f22196c-c68e-41b4-971a-8d3080828155 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.836788585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f22196c-c68e-41b4-971a-8d3080828155 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:25 pause-485426 crio[3324]: time="2025-11-15 10:35:25.837438596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f22196c-c68e-41b4-971a-8d3080828155 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	48e5fa00a0f40       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   2                   1c348243b6460       coredns-66bc5c9577-5zzjr
	fa458fc484f9b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   17 seconds ago      Running             kube-proxy                2                   0ded225e6bd02       kube-proxy-54x7t
	f833090b86dd6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   21 seconds ago      Running             kube-scheduler            2                   3f30fd7462ca9       kube-scheduler-pause-485426
	7dffb899e3c94       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   21 seconds ago      Running             etcd                      2                   9775ab565f614       etcd-pause-485426
	206a71517720c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago      Running             kube-controller-manager   2                   3324833e652d5       kube-controller-manager-pause-485426
	203bc1a24c28d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   21 seconds ago      Running             kube-apiserver            2                   29da08b3b79e8       kube-apiserver-pause-485426
	072cab25500ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   38 seconds ago      Exited              coredns                   1                   0ce42b7890249       coredns-66bc5c9577-5zzjr
	e51cba8382c59       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   39 seconds ago      Exited              kube-proxy                1                   7b5f96b40a72d       kube-proxy-54x7t
	e047e4281b937       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   39 seconds ago      Exited              etcd                      1                   b3cf3a3dbb14a       etcd-pause-485426
	ebd7d2679e0bd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   39 seconds ago      Exited              kube-scheduler            1                   396d05d411c53       kube-scheduler-pause-485426
	a97c92af3b09c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   39 seconds ago      Exited              kube-controller-manager   1                   2dede2ee6e6de       kube-controller-manager-pause-485426
	28f9ab2457f8c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   39 seconds ago      Exited              kube-apiserver            1                   44371065afe41       kube-apiserver-pause-485426
	
	
	==> coredns [072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:32888 - 1397 "HINFO IN 7800171720486180570.6168994105347819983. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111726383s
	
	
	==> coredns [48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54481 - 44401 "HINFO IN 6932698302414610845.45722158611507688. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.109579926s
	
	
	==> describe nodes <==
	Name:               pause-485426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-485426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=pause-485426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-485426
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    pause-485426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 57de329d51b54dab841010c0d66eb064
	  System UUID:                57de329d-51b5-4dab-8410-10c0d66eb064
	  Boot ID:                    cc932c67-7aae-4cc8-8638-3edb2c6319e0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5zzjr                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     59s
	  kube-system                 etcd-pause-485426                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         64s
	  kube-system                 kube-apiserver-pause-485426             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-pause-485426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-54x7t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-pause-485426             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node pause-485426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node pause-485426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node pause-485426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    64s                kubelet          Node pause-485426 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  64s                kubelet          Node pause-485426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     64s                kubelet          Node pause-485426 status is now: NodeHasSufficientPID
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeReady                63s                kubelet          Node pause-485426 status is now: NodeReady
	  Normal  RegisteredNode           60s                node-controller  Node pause-485426 event: Registered Node pause-485426 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-485426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-485426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-485426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-485426 event: Registered Node pause-485426 in Controller
	
	
	==> dmesg <==
	[Nov15 10:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001477] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000307] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Nov15 10:34] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.098470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.106476] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.151967] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.020480] kauditd_printk_skb: 18 callbacks suppressed
	[  +3.414057] kauditd_printk_skb: 219 callbacks suppressed
	[  +5.983517] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.021107] kauditd_printk_skb: 275 callbacks suppressed
	[Nov15 10:35] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.577715] kauditd_printk_skb: 122 callbacks suppressed
	
	
	==> etcd [7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8] <==
	{"level":"warn","ts":"2025-11-15T10:35:06.157759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.195734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.233852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.251769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.269724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.284370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.302376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.320494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.335165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.349633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.361936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.387392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.399261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.410114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.422798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.441584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.454994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.470290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.484589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.504153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.531789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.542350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.567419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.575777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.664483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	
	
	==> etcd [e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa] <==
	{"level":"info","ts":"2025-11-15T10:34:48.151354Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-11-15T10:34:48.182421Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-15T10:34:48.184068Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:34:48.207750Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.9:2379"}
	{"level":"info","ts":"2025-11-15T10:34:48.223081Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:34:48.231281Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-485426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	{"level":"info","ts":"2025-11-15T10:34:48.243324Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-15T10:34:48.259926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:37604: use of closed network connection"}
	2025/11/15 10:34:48 WARNING: [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"error","ts":"2025-11-15T10:34:48.267026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:34:48.268704Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:34:48.270139Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:34:48.270308Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e6c05fccff8d5b5b","current-leader-member-id":"e6c05fccff8d5b5b"}
	{"level":"info","ts":"2025-11-15T10:34:48.270709Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:34:48.270809Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271353Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271415Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:34:48.271431Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271491Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271529Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:34:48.271543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.9:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:34:48.275142Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"error","ts":"2025-11-15T10:34:48.275280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.9:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:34:48.275317Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2025-11-15T10:34:48.275352Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-485426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	
	
	==> kernel <==
	 10:35:26 up 1 min,  0 users,  load average: 2.15, 0.74, 0.26
	Linux pause-485426 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819] <==
	I1115 10:35:07.457695       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:35:07.462855       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:35:07.472775       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:35:07.472861       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:35:07.472880       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:35:07.472896       1 cache.go:39] Caches are synced for autoregister controller
	E1115 10:35:07.487469       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:35:07.489688       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:35:07.499110       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:07.502717       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:35:07.502833       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:35:07.502969       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:35:07.507098       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:35:07.507147       1 policy_source.go:240] refreshing policies
	I1115 10:35:07.536333       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:07.560706       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:08.148040       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:08.294780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:09.022746       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:09.112864       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:35:09.149945       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:09.157092       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:11.004465       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:11.154367       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:35:11.204384       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2] <==
	I1115 10:34:46.884955       1 options.go:263] external host was not specified, using 192.168.39.9
	I1115 10:34:46.899852       1 server.go:150] Version: v1.34.1
	I1115 10:34:46.901321       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c] <==
	I1115 10:35:10.825517       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:35:10.829000       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:10.829299       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:10.831481       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:35:10.832336       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:35:10.835445       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:35:10.836791       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:10.837976       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:35:10.839237       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:35:10.840513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:35:10.840570       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:35:10.841748       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:35:10.849505       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:35:10.850693       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:35:10.850750       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:10.850864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:10.850887       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:35:10.850903       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:35:10.850857       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:35:10.851080       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:35:10.851118       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:35:10.857667       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:10.874362       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:35:10.874838       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:35:10.887325       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-controller-manager [a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a] <==
	
	
	==> kube-proxy [e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586] <==
	
	
	==> kube-proxy [fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d] <==
	I1115 10:35:08.922668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:09.023673       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:09.023724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.9"]
	E1115 10:35:09.023836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:09.084337       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1115 10:35:09.084491       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1115 10:35:09.084528       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:09.104752       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:09.105426       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:09.105452       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:09.115930       1 config.go:200] "Starting service config controller"
	I1115 10:35:09.118724       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:09.116709       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:09.122334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:09.116659       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:09.122377       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:09.119408       1 config.go:309] "Starting node config controller"
	I1115 10:35:09.122406       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:09.122420       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:09.220857       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:35:09.223193       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:35:09.223463       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9] <==
	
	
	==> kube-scheduler [f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3] <==
	I1115 10:35:05.357529       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:35:07.583505       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:35:07.583615       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:07.598007       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:35:07.599062       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:35:07.599384       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:35:07.599099       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:07.599630       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:07.599163       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:35:07.599820       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:35:07.599180       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:35:07.700602       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:35:07.700955       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:35:07.701012       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.564852    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.565457    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.587004    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-485426\" already exists" pod="kube-system/kube-apiserver-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.590249    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-485426\" already exists" pod="kube-system/etcd-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.593247    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-485426\" already exists" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.609598    3786 kubelet_node_status.go:124] "Node was previously registered" node="pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.609709    3786 kubelet_node_status.go:78] "Successfully registered node" node="pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.609736    3786 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.610753    3786 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.614475    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-485426\" already exists" pod="kube-system/etcd-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.614501    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.642799    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-485426\" already exists" pod="kube-system/kube-apiserver-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.642843    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.652754    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-485426\" already exists" pod="kube-system/kube-controller-manager-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.652816    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.667193    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-485426\" already exists" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.048285    3786 apiserver.go:52] "Watching apiserver"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.071049    3786 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.139778    3786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/580dd749-55c2-4ae3-91db-623ae52c0bb4-xtables-lock\") pod \"kube-proxy-54x7t\" (UID: \"580dd749-55c2-4ae3-91db-623ae52c0bb4\") " pod="kube-system/kube-proxy-54x7t"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.139819    3786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580dd749-55c2-4ae3-91db-623ae52c0bb4-lib-modules\") pod \"kube-proxy-54x7t\" (UID: \"580dd749-55c2-4ae3-91db-623ae52c0bb4\") " pod="kube-system/kube-proxy-54x7t"
	Nov 15 10:35:10 pause-485426 kubelet[3786]: I1115 10:35:10.975143    3786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:35:13 pause-485426 kubelet[3786]: E1115 10:35:13.221775    3786 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763202913219848077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 15 10:35:13 pause-485426 kubelet[3786]: E1115 10:35:13.221871    3786 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763202913219848077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 15 10:35:23 pause-485426 kubelet[3786]: E1115 10:35:23.224314    3786 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763202923223373240  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 15 10:35:23 pause-485426 kubelet[3786]: E1115 10:35:23.224341    3786 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763202923223373240  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-485426 -n pause-485426
helpers_test.go:269: (dbg) Run:  kubectl --context pause-485426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-485426 -n pause-485426
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-485426 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-485426 logs -n 25: (3.799253201s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ ssh     │ cert-options-636664 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                 │ cert-options-636664       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ ssh     │ -p cert-options-636664 -- sudo cat /etc/kubernetes/admin.conf                                                                                               │ cert-options-636664       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ delete  │ -p cert-options-636664                                                                                                                                      │ cert-options-636664       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ ssh     │ -p NoKubernetes-170129 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │                     │
	│ start   │ -p stopped-upgrade-814289 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-814289    │ jenkins │ v1.32.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ stop    │ -p NoKubernetes-170129                                                                                                                                      │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p NoKubernetes-170129 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ stop    │ -p kubernetes-upgrade-546745                                                                                                                                │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ ssh     │ -p NoKubernetes-170129 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p NoKubernetes-170129                                                                                                                                      │ NoKubernetes-170129       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p pause-485426 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-485426              │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ stop    │ stopped-upgrade-814289 stop                                                                                                                                 │ stopped-upgrade-814289    │ jenkins │ v1.32.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p stopped-upgrade-814289 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-814289    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p pause-485426 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-485426              │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ delete  │ -p kubernetes-upgrade-546745                                                                                                                                │ kubernetes-upgrade-546745 │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p guest-763099 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-763099              │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-814289 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-814289    │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ delete  │ -p stopped-upgrade-814289                                                                                                                                   │ stopped-upgrade-814289    │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ start   │ -p auto-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-765007               │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ start   │ -p cert-expiration-506364 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-506364    │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ start   │ -p kindnet-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-765007            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:04
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:04.867476  447023 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:04.867627  447023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:04.867637  447023 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:04.867643  447023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:04.867965  447023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:35:04.868581  447023 out.go:368] Setting JSON to false
	I1115 10:35:04.869929  447023 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8252,"bootTime":1763194653,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:35:04.870072  447023 start.go:143] virtualization: kvm guest
	I1115 10:35:04.872349  447023 out.go:179] * [kindnet-765007] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:35:04.874375  447023 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:35:04.874374  447023 notify.go:221] Checking for updates...
	I1115 10:35:04.875931  447023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:04.877373  447023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:35:04.878899  447023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 10:35:04.880271  447023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:35:04.881799  447023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:04.883702  447023 config.go:182] Loaded profile config "auto-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.883875  447023 config.go:182] Loaded profile config "cert-expiration-506364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.883978  447023 config.go:182] Loaded profile config "guest-763099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1115 10:35:04.884111  447023 config.go:182] Loaded profile config "pause-485426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.884217  447023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:04.928555  447023 out.go:179] * Using the kvm2 driver based on user configuration
	I1115 10:35:04.930026  447023 start.go:309] selected driver: kvm2
	I1115 10:35:04.930051  447023 start.go:930] validating driver "kvm2" against <nil>
	I1115 10:35:04.930070  447023 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:04.931341  447023 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:35:04.931685  447023 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:04.931724  447023 cni.go:84] Creating CNI manager for "kindnet"
	I1115 10:35:04.931733  447023 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:35:04.931791  447023 start.go:353] cluster config:
	{Name:kindnet-765007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-765007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:04.931919  447023 iso.go:125] acquiring lock: {Name:mke3d0b50f750b07aabde39a6bc9fa707eafad32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:04.934552  447023 out.go:179] * Starting "kindnet-765007" primary control-plane node in "kindnet-765007" cluster
	I1115 10:35:04.154838  446484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:04.213621  446484 api_server.go:72] duration metric: took 1.061580731s to wait for apiserver process to appear ...
	I1115 10:35:04.213655  446484 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:04.213703  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:04.214344  446484 api_server.go:269] stopped: https://192.168.39.9:8443/healthz: Get "https://192.168.39.9:8443/healthz": dial tcp 192.168.39.9:8443: connect: connection refused
	I1115 10:35:04.713849  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:07.363084  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:35:07.363118  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:35:07.363138  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:07.423347  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:35:07.423385  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:35:07.713756  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:07.722154  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:07.722187  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:35:08.214865  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:08.220007  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:35:08.220042  446484 api_server.go:103] status: https://192.168.39.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:35:08.714012  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:08.729175  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1115 10:35:08.744633  446484 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:08.744690  446484 api_server.go:131] duration metric: took 4.531003412s to wait for apiserver health ...
	I1115 10:35:08.744705  446484 cni.go:84] Creating CNI manager for ""
	I1115 10:35:08.744714  446484 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 10:35:08.746504  446484 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 10:35:08.748655  446484 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 10:35:08.774655  446484 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 10:35:08.813763  446484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:08.829692  446484 system_pods.go:59] 6 kube-system pods found
	I1115 10:35:08.829739  446484 system_pods.go:61] "coredns-66bc5c9577-5zzjr" [1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:08.829753  446484 system_pods.go:61] "etcd-pause-485426" [8b5c081b-8732-4d7f-87c4-59c24d96de14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:08.829767  446484 system_pods.go:61] "kube-apiserver-pause-485426" [d005f276-2ce2-4c2c-9285-40b8fc1047bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:08.829778  446484 system_pods.go:61] "kube-controller-manager-pause-485426" [a4db21f8-7cae-40bc-a464-c7bfd1fa7610] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:08.829786  446484 system_pods.go:61] "kube-proxy-54x7t" [580dd749-55c2-4ae3-91db-623ae52c0bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:35:08.829794  446484 system_pods.go:61] "kube-scheduler-pause-485426" [d9718ae3-cc0e-443c-b3af-6c40bffa84bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:08.829807  446484 system_pods.go:74] duration metric: took 16.014485ms to wait for pod list to return data ...
	I1115 10:35:08.829819  446484 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:08.840207  446484 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 10:35:08.840251  446484 node_conditions.go:123] node cpu capacity is 2
	I1115 10:35:08.840274  446484 node_conditions.go:105] duration metric: took 10.444265ms to run NodePressure ...
	I1115 10:35:08.840342  446484 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:35:09.166726  446484 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1115 10:35:09.170395  446484 kubeadm.go:744] kubelet initialised
	I1115 10:35:09.170428  446484 kubeadm.go:745] duration metric: took 3.67217ms waiting for restarted kubelet to initialise ...
	I1115 10:35:09.170452  446484 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:35:09.186134  446484 ops.go:34] apiserver oom_adj: -16
	I1115 10:35:09.186167  446484 kubeadm.go:602] duration metric: took 9.43934177s to restartPrimaryControlPlane
	I1115 10:35:09.186182  446484 kubeadm.go:403] duration metric: took 9.550188034s to StartCluster
	I1115 10:35:09.186206  446484 settings.go:142] acquiring lock: {Name:mk51bbf0fd9b357d299ebd118e728450a954032c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:09.186308  446484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:35:09.187231  446484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/kubeconfig: {Name:mk18351328d03342e92a234b66dd855b67ad51ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:09.187530  446484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:09.187605  446484 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:09.187858  446484 config.go:182] Loaded profile config "pause-485426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:09.189363  446484 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:09.190256  446484 out.go:179] * Enabled addons: 
	I1115 10:35:04.947872  446784 main.go:143] libmachine: waiting for domain to start...
	I1115 10:35:04.949579  446784 main.go:143] libmachine: domain is now running
	I1115 10:35:04.949601  446784 main.go:143] libmachine: waiting for IP...
	I1115 10:35:04.950567  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:04.951404  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:04.951425  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:04.951857  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:04.951928  446784 retry.go:31] will retry after 204.12322ms: waiting for domain to come up
	I1115 10:35:05.157363  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:05.158259  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:05.158283  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:05.158705  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:05.158752  446784 retry.go:31] will retry after 247.632117ms: waiting for domain to come up
	I1115 10:35:05.408324  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:05.409091  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:05.409115  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:05.409541  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:05.409601  446784 retry.go:31] will retry after 440.981833ms: waiting for domain to come up
	I1115 10:35:05.852696  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:05.853506  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:05.853522  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:05.854046  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:05.854090  446784 retry.go:31] will retry after 382.523756ms: waiting for domain to come up
	I1115 10:35:06.238948  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:06.239737  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:06.239767  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:06.240105  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:06.240145  446784 retry.go:31] will retry after 576.427015ms: waiting for domain to come up
	I1115 10:35:06.818027  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:06.818813  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:06.818836  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:06.819242  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:06.819289  446784 retry.go:31] will retry after 861.71118ms: waiting for domain to come up
	I1115 10:35:07.682480  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:07.683399  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:07.683428  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:07.683832  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:07.683877  446784 retry.go:31] will retry after 1.063502672s: waiting for domain to come up
	I1115 10:35:08.749045  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:08.749717  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:08.749734  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:08.750056  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:08.750094  446784 retry.go:31] will retry after 1.248064704s: waiting for domain to come up
	I1115 10:35:04.935804  447023 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:04.935848  447023 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:35:04.935869  447023 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:04.935989  447023 preload.go:238] Found /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:35:04.936005  447023 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:04.936143  447023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/config.json ...
	I1115 10:35:04.936169  447023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/config.json: {Name:mk80c7c7043866a72e212241a0b10c76cd171e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:04.936353  447023 start.go:360] acquireMachinesLock for kindnet-765007: {Name:mk50d09d451dfb6834d3dcf4331d8b4da7231bd5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 10:35:09.191143  446484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:09.191893  446484 addons.go:515] duration metric: took 4.292717ms for enable addons: enabled=[]
	I1115 10:35:09.402032  446484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:09.430567  446484 node_ready.go:35] waiting up to 6m0s for node "pause-485426" to be "Ready" ...
	I1115 10:35:09.433878  446484 node_ready.go:49] node "pause-485426" is "Ready"
	I1115 10:35:09.433930  446484 node_ready.go:38] duration metric: took 3.306323ms for node "pause-485426" to be "Ready" ...
	I1115 10:35:09.433953  446484 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:09.434022  446484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:09.455914  446484 api_server.go:72] duration metric: took 268.341896ms to wait for apiserver process to appear ...
	I1115 10:35:09.455948  446484 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:09.455976  446484 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1115 10:35:09.462967  446484 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1115 10:35:09.463965  446484 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:09.463991  446484 api_server.go:131] duration metric: took 8.033619ms to wait for apiserver health ...
	I1115 10:35:09.464002  446484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:09.467408  446484 system_pods.go:59] 6 kube-system pods found
	I1115 10:35:09.467454  446484 system_pods.go:61] "coredns-66bc5c9577-5zzjr" [1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:09.467467  446484 system_pods.go:61] "etcd-pause-485426" [8b5c081b-8732-4d7f-87c4-59c24d96de14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:09.467479  446484 system_pods.go:61] "kube-apiserver-pause-485426" [d005f276-2ce2-4c2c-9285-40b8fc1047bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:09.467492  446484 system_pods.go:61] "kube-controller-manager-pause-485426" [a4db21f8-7cae-40bc-a464-c7bfd1fa7610] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:09.467508  446484 system_pods.go:61] "kube-proxy-54x7t" [580dd749-55c2-4ae3-91db-623ae52c0bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:35:09.467547  446484 system_pods.go:61] "kube-scheduler-pause-485426" [d9718ae3-cc0e-443c-b3af-6c40bffa84bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:09.467564  446484 system_pods.go:74] duration metric: took 3.553322ms to wait for pod list to return data ...
	I1115 10:35:09.467579  446484 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:09.470190  446484 default_sa.go:45] found service account: "default"
	I1115 10:35:09.470217  446484 default_sa.go:55] duration metric: took 2.626387ms for default service account to be created ...
	I1115 10:35:09.470228  446484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:09.473196  446484 system_pods.go:86] 6 kube-system pods found
	I1115 10:35:09.473237  446484 system_pods.go:89] "coredns-66bc5c9577-5zzjr" [1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:09.473250  446484 system_pods.go:89] "etcd-pause-485426" [8b5c081b-8732-4d7f-87c4-59c24d96de14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:09.473260  446484 system_pods.go:89] "kube-apiserver-pause-485426" [d005f276-2ce2-4c2c-9285-40b8fc1047bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:09.473269  446484 system_pods.go:89] "kube-controller-manager-pause-485426" [a4db21f8-7cae-40bc-a464-c7bfd1fa7610] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:09.473278  446484 system_pods.go:89] "kube-proxy-54x7t" [580dd749-55c2-4ae3-91db-623ae52c0bb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:35:09.473286  446484 system_pods.go:89] "kube-scheduler-pause-485426" [d9718ae3-cc0e-443c-b3af-6c40bffa84bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:09.473305  446484 system_pods.go:126] duration metric: took 3.068275ms to wait for k8s-apps to be running ...
	I1115 10:35:09.473318  446484 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:09.473386  446484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:09.502561  446484 system_svc.go:56] duration metric: took 29.228306ms WaitForService to wait for kubelet
	I1115 10:35:09.502597  446484 kubeadm.go:587] duration metric: took 315.034762ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:09.502625  446484 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:09.505333  446484 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 10:35:09.505371  446484 node_conditions.go:123] node cpu capacity is 2
	I1115 10:35:09.505391  446484 node_conditions.go:105] duration metric: took 2.759026ms to run NodePressure ...
	I1115 10:35:09.505411  446484 start.go:242] waiting for startup goroutines ...
	I1115 10:35:09.505426  446484 start.go:247] waiting for cluster config update ...
	I1115 10:35:09.505441  446484 start.go:256] writing updated cluster config ...
	I1115 10:35:09.505890  446484 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:09.513131  446484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:09.514071  446484 kapi.go:59] client config for pause-485426: &rest.Config{Host:"https://192.168.39.9:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/profiles/pause-485426/client.key", CAFile:"/home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:35:09.516892  446484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5zzjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:11.025108  446484 pod_ready.go:94] pod "coredns-66bc5c9577-5zzjr" is "Ready"
	I1115 10:35:11.025156  446484 pod_ready.go:86] duration metric: took 1.508237033s for pod "coredns-66bc5c9577-5zzjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:11.028462  446484 pod_ready.go:83] waiting for pod "etcd-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:13.036070  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	I1115 10:35:09.999939  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:10.000871  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:10.000901  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:10.001396  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:10.001453  446784 retry.go:31] will retry after 1.398285842s: waiting for domain to come up
	I1115 10:35:11.402122  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:11.402819  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:11.402837  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:11.403174  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:11.403208  446784 retry.go:31] will retry after 1.520876771s: waiting for domain to come up
	I1115 10:35:12.926064  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:12.926871  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:12.926903  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:12.927407  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:12.927460  446784 retry.go:31] will retry after 2.096829655s: waiting for domain to come up
	W1115 10:35:15.534875  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	W1115 10:35:17.535312  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	I1115 10:35:15.026834  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:15.027831  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:15.027875  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:15.028425  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:15.028478  446784 retry.go:31] will retry after 2.635595032s: waiting for domain to come up
	I1115 10:35:17.665915  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:17.666480  446784 main.go:143] libmachine: no network interface addresses found for domain auto-765007 (source=lease)
	I1115 10:35:17.666497  446784 main.go:143] libmachine: trying to list again with source=arp
	I1115 10:35:17.666828  446784 main.go:143] libmachine: unable to find current IP address of domain auto-765007 in network mk-auto-765007 (interfaces detected: [])
	I1115 10:35:17.666870  446784 retry.go:31] will retry after 2.822178743s: waiting for domain to come up
	I1115 10:35:21.993533  446878 start.go:364] duration metric: took 22.600643222s to acquireMachinesLock for "cert-expiration-506364"
	I1115 10:35:21.993596  446878 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:35:21.993603  446878 fix.go:54] fixHost starting: 
	I1115 10:35:21.996096  446878 fix.go:112] recreateIfNeeded on cert-expiration-506364: state=Running err=<nil>
	W1115 10:35:21.996119  446878 fix.go:138] unexpected machine state, will restart: <nil>
	W1115 10:35:20.034391  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	W1115 10:35:22.035234  446484 pod_ready.go:104] pod "etcd-pause-485426" is not "Ready", error: <nil>
	I1115 10:35:20.491026  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.491723  446784 main.go:143] libmachine: domain auto-765007 has current primary IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.491745  446784 main.go:143] libmachine: found domain IP: 192.168.61.247
	I1115 10:35:20.491757  446784 main.go:143] libmachine: reserving static IP address...
	I1115 10:35:20.492104  446784 main.go:143] libmachine: unable to find host DHCP lease matching {name: "auto-765007", mac: "52:54:00:aa:35:52", ip: "192.168.61.247"} in network mk-auto-765007
	I1115 10:35:20.718216  446784 main.go:143] libmachine: reserved static IP address 192.168.61.247 for domain auto-765007
	I1115 10:35:20.718246  446784 main.go:143] libmachine: waiting for SSH...
	I1115 10:35:20.718255  446784 main.go:143] libmachine: Getting to WaitForSSH function...
	I1115 10:35:20.721163  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.721621  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:20.721673  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.721919  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:20.722233  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:20.722250  446784 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1115 10:35:20.831369  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:20.831841  446784 main.go:143] libmachine: domain creation complete
	I1115 10:35:20.833677  446784 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:20.836369  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.836887  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:20.836928  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.837233  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:20.837460  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:20.837470  446784 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:20.949322  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1115 10:35:20.949359  446784 buildroot.go:166] provisioning hostname "auto-765007"
	I1115 10:35:20.952459  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.952977  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:20.953017  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:20.953291  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:20.953499  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:20.953511  446784 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-765007 && echo "auto-765007" | sudo tee /etc/hostname
	I1115 10:35:21.083155  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-765007
	
	I1115 10:35:21.086811  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.087318  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.087351  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.087614  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:21.087898  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:21.087922  446784 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-765007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-765007/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-765007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:21.209018  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:21.209058  446784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 10:35:21.209078  446784 buildroot.go:174] setting up certificates
	I1115 10:35:21.209089  446784 provision.go:84] configureAuth start
	I1115 10:35:21.212056  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.212488  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.212514  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.214682  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.215126  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.215150  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.215292  446784 provision.go:143] copyHostCerts
	I1115 10:35:21.215363  446784 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem, removing ...
	I1115 10:35:21.215381  446784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem
	I1115 10:35:21.215445  446784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 10:35:21.215559  446784 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem, removing ...
	I1115 10:35:21.215572  446784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem
	I1115 10:35:21.215616  446784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 10:35:21.215731  446784 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem, removing ...
	I1115 10:35:21.215743  446784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem
	I1115 10:35:21.215769  446784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 10:35:21.215840  446784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.auto-765007 san=[127.0.0.1 192.168.61.247 auto-765007 localhost minikube]
	I1115 10:35:21.297171  446784 provision.go:177] copyRemoteCerts
	I1115 10:35:21.297249  446784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:21.299913  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.300426  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.300473  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.300722  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:21.388651  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:21.421045  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 10:35:21.452284  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:35:21.484782  446784 provision.go:87] duration metric: took 275.673851ms to configureAuth
	I1115 10:35:21.484826  446784 buildroot.go:189] setting minikube options for container-runtime
	I1115 10:35:21.485079  446784 config.go:182] Loaded profile config "auto-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:21.488464  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.488941  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.488979  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.489195  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:21.489409  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:21.489431  446784 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:21.733790  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:21.733825  446784 machine.go:97] duration metric: took 900.127474ms to provisionDockerMachine
	I1115 10:35:21.733841  446784 client.go:176] duration metric: took 18.840528393s to LocalClient.Create
	I1115 10:35:21.733864  446784 start.go:167] duration metric: took 18.840601444s to libmachine.API.Create "auto-765007"
	I1115 10:35:21.733885  446784 start.go:293] postStartSetup for "auto-765007" (driver="kvm2")
	I1115 10:35:21.733911  446784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:21.734006  446784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:21.736865  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.737305  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.737332  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.737479  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:21.824501  446784 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:21.829432  446784 info.go:137] Remote host: Buildroot 2025.02
	I1115 10:35:21.829464  446784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/addons for local assets ...
	I1115 10:35:21.829549  446784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-412813/.minikube/files for local assets ...
	I1115 10:35:21.829644  446784 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem -> 4168012.pem in /etc/ssl/certs
	I1115 10:35:21.829786  446784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:21.843508  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/ssl/certs/4168012.pem --> /etc/ssl/certs/4168012.pem (1708 bytes)
	I1115 10:35:21.874047  446784 start.go:296] duration metric: took 140.141188ms for postStartSetup
	I1115 10:35:21.877614  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.878065  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.878094  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.878302  446784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/config.json ...
	I1115 10:35:21.878488  446784 start.go:128] duration metric: took 18.987728003s to createHost
	I1115 10:35:21.880739  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.881158  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.881180  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.881329  446784 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:21.881506  446784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I1115 10:35:21.881514  446784 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 10:35:21.993338  446784 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763202921.955406441
	
	I1115 10:35:21.993365  446784 fix.go:216] guest clock: 1763202921.955406441
	I1115 10:35:21.993375  446784 fix.go:229] Guest: 2025-11-15 10:35:21.955406441 +0000 UTC Remote: 2025-11-15 10:35:21.878499848 +0000 UTC m=+32.122616878 (delta=76.906593ms)
	I1115 10:35:21.993400  446784 fix.go:200] guest clock delta is within tolerance: 76.906593ms
	I1115 10:35:21.993407  446784 start.go:83] releasing machines lock for "auto-765007", held for 19.10288731s
	I1115 10:35:21.997038  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.997486  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:21.997516  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:21.998088  446784 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:21.998151  446784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:22.002101  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002111  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002636  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:22.002675  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:22.002682  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002708  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:22.002902  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:22.003186  446784 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/auto-765007/id_rsa Username:docker}
	I1115 10:35:22.089064  446784 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:22.115266  446784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:22.283472  446784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:22.291315  446784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:22.291395  446784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:22.312644  446784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:35:22.312687  446784 start.go:496] detecting cgroup driver to use...
	I1115 10:35:22.312758  446784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:22.334009  446784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:22.352971  446784 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:22.353037  446784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:22.372004  446784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:22.392235  446784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:22.559843  446784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:22.768836  446784 docker.go:234] disabling docker service ...
	I1115 10:35:22.768905  446784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:22.789236  446784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:22.805697  446784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:22.990252  446784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:23.163022  446784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:23.178985  446784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:23.202017  446784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:23.202078  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.215395  446784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:23.215488  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.231191  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.243531  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.255861  446784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:23.268575  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.280799  446784 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.300974  446784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:23.313744  446784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:23.324132  446784 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 10:35:23.324195  446784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 10:35:23.344585  446784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:23.356233  446784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:23.494556  446784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:23.600869  446784 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:23.600987  446784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:23.606080  446784 start.go:564] Will wait 60s for crictl version
	I1115 10:35:23.606146  446784 ssh_runner.go:195] Run: which crictl
	I1115 10:35:23.610129  446784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 10:35:23.649185  446784 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 10:35:23.649277  446784 ssh_runner.go:195] Run: crio --version
	I1115 10:35:23.681373  446784 ssh_runner.go:195] Run: crio --version
	I1115 10:35:23.715049  446784 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1115 10:35:21.998417  446878 out.go:252] * Updating the running kvm2 "cert-expiration-506364" VM ...
	I1115 10:35:21.998440  446878 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:22.002293  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.002771  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.002815  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.003386  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.003649  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.003656  446878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:22.119758  446878 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-506364
	
	I1115 10:35:22.119797  446878 buildroot.go:166] provisioning hostname "cert-expiration-506364"
	I1115 10:35:22.122999  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.123487  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.123520  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.123728  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.124026  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.124038  446878 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-506364 && echo "cert-expiration-506364" | sudo tee /etc/hostname
	I1115 10:35:22.260001  446878 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-506364
	
	I1115 10:35:22.263187  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.263587  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.263606  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.263775  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.263992  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.264001  446878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-506364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-506364/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-506364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:22.386050  446878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:22.386069  446878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21894-412813/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-412813/.minikube}
	I1115 10:35:22.386087  446878 buildroot.go:174] setting up certificates
	I1115 10:35:22.386095  446878 provision.go:84] configureAuth start
	I1115 10:35:22.389773  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.390236  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.390256  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.393522  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.393971  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.393990  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.394218  446878 provision.go:143] copyHostCerts
	I1115 10:35:22.394302  446878 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem, removing ...
	I1115 10:35:22.394319  446878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem
	I1115 10:35:22.394393  446878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/ca.pem (1082 bytes)
	I1115 10:35:22.394568  446878 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem, removing ...
	I1115 10:35:22.394577  446878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem
	I1115 10:35:22.394636  446878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/cert.pem (1123 bytes)
	I1115 10:35:22.394846  446878 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem, removing ...
	I1115 10:35:22.394854  446878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem
	I1115 10:35:22.394892  446878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-412813/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-412813/.minikube/key.pem (1675 bytes)
	I1115 10:35:22.394985  446878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-506364 san=[127.0.0.1 192.168.50.33 cert-expiration-506364 localhost minikube]
	I1115 10:35:22.762541  446878 provision.go:177] copyRemoteCerts
	I1115 10:35:22.762604  446878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:22.765379  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.765776  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.765799  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.765960  446878 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/cert-expiration-506364/id_rsa Username:docker}
	I1115 10:35:22.864091  446878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:22.897868  446878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:35:22.935039  446878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:35:22.972305  446878 provision.go:87] duration metric: took 586.191143ms to configureAuth
	I1115 10:35:22.972332  446878 buildroot.go:189] setting minikube options for container-runtime
	I1115 10:35:22.972546  446878 config.go:182] Loaded profile config "cert-expiration-506364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:22.976130  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.976635  446878 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:80:f3", ip: ""} in network mk-cert-expiration-506364: {Iface:virbr2 ExpiryTime:2025-11-15 11:31:32 +0000 UTC Type:0 Mac:52:54:00:66:80:f3 Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:cert-expiration-506364 Clientid:01:52:54:00:66:80:f3}
	I1115 10:35:22.976656  446878 main.go:143] libmachine: domain cert-expiration-506364 has defined IP address 192.168.50.33 and MAC address 52:54:00:66:80:f3 in network mk-cert-expiration-506364
	I1115 10:35:22.976860  446878 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:22.977088  446878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I1115 10:35:22.977097  446878 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:23.719297  446784 main.go:143] libmachine: domain auto-765007 has defined MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:23.719740  446784 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:35:52", ip: ""} in network mk-auto-765007: {Iface:virbr3 ExpiryTime:2025-11-15 11:35:18 +0000 UTC Type:0 Mac:52:54:00:aa:35:52 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:auto-765007 Clientid:01:52:54:00:aa:35:52}
	I1115 10:35:23.719770  446784 main.go:143] libmachine: domain auto-765007 has defined IP address 192.168.61.247 and MAC address 52:54:00:aa:35:52 in network mk-auto-765007
	I1115 10:35:23.720038  446784 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:23.724993  446784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:23.739932  446784 kubeadm.go:884] updating cluster {Name:auto-765007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:auto-765007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:23.740089  446784 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:23.740154  446784 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:23.774309  446784 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 10:35:23.774384  446784 ssh_runner.go:195] Run: which lz4
	I1115 10:35:23.778780  446784 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1115 10:35:23.783398  446784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1115 10:35:23.783439  446784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1115 10:35:24.035621  446484 pod_ready.go:94] pod "etcd-pause-485426" is "Ready"
	I1115 10:35:24.035684  446484 pod_ready.go:86] duration metric: took 13.007193766s for pod "etcd-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.039427  446484 pod_ready.go:83] waiting for pod "kube-apiserver-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.046743  446484 pod_ready.go:94] pod "kube-apiserver-pause-485426" is "Ready"
	I1115 10:35:24.046784  446484 pod_ready.go:86] duration metric: took 7.324303ms for pod "kube-apiserver-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.054471  446484 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.064018  446484 pod_ready.go:94] pod "kube-controller-manager-pause-485426" is "Ready"
	I1115 10:35:24.064057  446484 pod_ready.go:86] duration metric: took 9.5451ms for pod "kube-controller-manager-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.068631  446484 pod_ready.go:83] waiting for pod "kube-proxy-54x7t" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.233075  446484 pod_ready.go:94] pod "kube-proxy-54x7t" is "Ready"
	I1115 10:35:24.233118  446484 pod_ready.go:86] duration metric: took 164.429491ms for pod "kube-proxy-54x7t" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.433253  446484 pod_ready.go:83] waiting for pod "kube-scheduler-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.834003  446484 pod_ready.go:94] pod "kube-scheduler-pause-485426" is "Ready"
	I1115 10:35:24.834037  446484 pod_ready.go:86] duration metric: took 400.747769ms for pod "kube-scheduler-pause-485426" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.834054  446484 pod_ready.go:40] duration metric: took 15.320890226s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:24.901924  446484 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:35:24.904339  446484 out.go:179] * Done! kubectl is now configured to use "pause-485426" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.808832108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc59ae91-7b3c-494f-b2f1-b86c82cc8677 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.809287271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc59ae91-7b3c-494f-b2f1-b86c82cc8677 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.859666083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c9ad39c-f23e-48be-bea0-a7a9480528ba name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.859737175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c9ad39c-f23e-48be-bea0-a7a9480528ba name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.860953948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=179ecdef-d42f-4a38-8cac-d5220453bd61 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.862114212Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=4a209be7-901e-40d4-8818-5954d56d30d1 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.862290418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202927862263449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=179ecdef-d42f-4a38-8cac-d5220453bd61 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.862310493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a209be7-901e-40d4-8818-5954d56d30d1 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.863029728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d5dadde-5d88-4bc8-9011-1297ec41a427 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.863079606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d5dadde-5d88-4bc8-9011-1297ec41a427 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.863466523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d5dadde-5d88-4bc8-9011-1297ec41a427 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.906918201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7382e26e-3ba7-45f1-82d6-4fbec4d29041 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.906991549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7382e26e-3ba7-45f1-82d6-4fbec4d29041 name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.908276376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=821e02d7-aa77-4080-9eb6-85338238632c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.909412287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202927909377315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=821e02d7-aa77-4080-9eb6-85338238632c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.910539335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28abab7c-9fbf-44f0-bdad-384add931134 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.910733932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28abab7c-9fbf-44f0-bdad-384add931134 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.911170994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28abab7c-9fbf-44f0-bdad-384add931134 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.956900243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f10a01a-30ae-4145-be05-0bd76025b65c name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.956983683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f10a01a-30ae-4145-be05-0bd76025b65c name=/runtime.v1.RuntimeService/Version
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.958323556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90e92218-f521-491c-bf85-b5aa379051cf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.958698657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763202927958677927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90e92218-f521-491c-bf85-b5aa379051cf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.959352515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=571edf69-9ba9-49a9-b99b-a7688178760a name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.959507976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=571edf69-9ba9-49a9-b99b-a7688178760a name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 10:35:27 pause-485426 crio[3324]: time="2025-11-15 10:35:27.959790771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2,PodSandboxId:1c348243b646035508a12e688f36d749f6e28fedd540475357b18d678b1e97a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763202908917282694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d,PodSandboxId:0ded225e6bd02f37d52a810a5b09a8d5017cbd0235e3e4c38fd95094eaf91789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763202908582660501,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8,PodSandboxId:9775ab565f6143079bcfc00b89b67abbe9948478f799a1571d1f8ce3ba9ac33d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763202903987341279,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3,PodSandboxId:3f30fd7462ca9308c688ff77f836f38094f080eabddf19092add6096c8cc361e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1763202904021822140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c,PodSandboxId:3324833e652d54dd85440d186105e467255a38a4643f0fbdf65a700fc7577b0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763202903957842134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819,PodSandboxId:29da08b3b79e8d789bb96567a3e9f3d515da85de535af017920f1a274399058c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763202903954059878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614,PodSandboxId:0ce42b78902496eec100d1819eea1cefddecb9c023bb07
3cae1cce3e0ba4055d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763202887441542482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5zzjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e0e6c3f-69b7-4d3e-abf7-b1fe20a07d96,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586,PodSandboxId:7b5f96b40a72d3217034f233c772c2c66474bbb7cfba09aaffc18d4aa1396156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763202886450140991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54x7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 580dd749-55c2-4ae3-91db-623ae52c0bb4,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa,PodSandboxId:b3cf3a3dbb14a581f9af23c753ff28ebe08b43cd035b8c264bf16d1934cff91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763202886430392642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dae304c504fc5effd23cf9ebb9daa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9,PodSandboxId:396d05d411c53ea28fe6c420a3d5acf535b410c7f3549feab56de9a9effcb564,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763202886347399637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bf336465658494145b970b817f83aa,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a,PodSandboxId:2dede2ee6e6deb52e00401e7bb37ac5ed1fe3698dfcfdfef27fdbb20c1f7a9cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763202886247667558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-485426,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 1cd57b179067198663917e132bf01ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2,PodSandboxId:44371065afe41fb0028c2817da45149617d059cef9da751aa3aab83fb7d95b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763202886071520943,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-485426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b1e7356be0998212573bd481c46e9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=571edf69-9ba9-49a9-b99b-a7688178760a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	48e5fa00a0f40       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   19 seconds ago      Running             coredns                   2                   1c348243b6460       coredns-66bc5c9577-5zzjr
	fa458fc484f9b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   19 seconds ago      Running             kube-proxy                2                   0ded225e6bd02       kube-proxy-54x7t
	f833090b86dd6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   24 seconds ago      Running             kube-scheduler            2                   3f30fd7462ca9       kube-scheduler-pause-485426
	7dffb899e3c94       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   24 seconds ago      Running             etcd                      2                   9775ab565f614       etcd-pause-485426
	206a71517720c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   24 seconds ago      Running             kube-controller-manager   2                   3324833e652d5       kube-controller-manager-pause-485426
	203bc1a24c28d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   24 seconds ago      Running             kube-apiserver            2                   29da08b3b79e8       kube-apiserver-pause-485426
	072cab25500ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   40 seconds ago      Exited              coredns                   1                   0ce42b7890249       coredns-66bc5c9577-5zzjr
	e51cba8382c59       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   41 seconds ago      Exited              kube-proxy                1                   7b5f96b40a72d       kube-proxy-54x7t
	e047e4281b937       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   41 seconds ago      Exited              etcd                      1                   b3cf3a3dbb14a       etcd-pause-485426
	ebd7d2679e0bd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   41 seconds ago      Exited              kube-scheduler            1                   396d05d411c53       kube-scheduler-pause-485426
	a97c92af3b09c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   41 seconds ago      Exited              kube-controller-manager   1                   2dede2ee6e6de       kube-controller-manager-pause-485426
	28f9ab2457f8c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   41 seconds ago      Exited              kube-apiserver            1                   44371065afe41       kube-apiserver-pause-485426
	
	
	==> coredns [072cab25500ad828f859389aa99ad420f77de099eea62e60b0aec940cedb9614] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:32888 - 1397 "HINFO IN 7800171720486180570.6168994105347819983. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111726383s
	
	
	==> coredns [48e5fa00a0f405e7883fa1c8b8d3383c08416544a3a25b5514301bb63a61c3e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54481 - 44401 "HINFO IN 6932698302414610845.45722158611507688. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.109579926s
	
	
	==> describe nodes <==
	Name:               pause-485426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-485426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=pause-485426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-485426
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:07 +0000   Sat, 15 Nov 2025 10:34:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    pause-485426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 57de329d51b54dab841010c0d66eb064
	  System UUID:                57de329d-51b5-4dab-8410-10c0d66eb064
	  Boot ID:                    cc932c67-7aae-4cc8-8638-3edb2c6319e0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5zzjr                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     61s
	  kube-system                 etcd-pause-485426                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         66s
	  kube-system                 kube-apiserver-pause-485426             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-pause-485426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-54x7t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-pause-485426             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node pause-485426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node pause-485426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node pause-485426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    66s                kubelet          Node pause-485426 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  66s                kubelet          Node pause-485426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     66s                kubelet          Node pause-485426 status is now: NodeHasSufficientPID
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  NodeReady                65s                kubelet          Node pause-485426 status is now: NodeReady
	  Normal  RegisteredNode           62s                node-controller  Node pause-485426 event: Registered Node pause-485426 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-485426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-485426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-485426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-485426 event: Registered Node pause-485426 in Controller
	
	
	==> dmesg <==
	[Nov15 10:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001477] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000307] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Nov15 10:34] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.098470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.106476] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.151967] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.020480] kauditd_printk_skb: 18 callbacks suppressed
	[  +3.414057] kauditd_printk_skb: 219 callbacks suppressed
	[  +5.983517] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.021107] kauditd_printk_skb: 275 callbacks suppressed
	[Nov15 10:35] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.577715] kauditd_printk_skb: 122 callbacks suppressed
	
	
	==> etcd [7dffb899e3c941b47dba0a091877f8705d0a0b12bae36e0fe309c318b36636b8] <==
	{"level":"warn","ts":"2025-11-15T10:35:06.157759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.195734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.233852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.251769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.269724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.284370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.302376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.320494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.335165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.349633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.361936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.387392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.399261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.410114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.422798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.441584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.454994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.470290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.484589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.504153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.531789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.542350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.567419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.575777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:06.664483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	
	
	==> etcd [e047e4281b937590db75cab3044ef73ebe60878feca1c0e13c0453f2a8b292fa] <==
	{"level":"info","ts":"2025-11-15T10:34:48.151354Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-11-15T10:34:48.182421Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-15T10:34:48.184068Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:34:48.207750Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.9:2379"}
	{"level":"info","ts":"2025-11-15T10:34:48.223081Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:34:48.231281Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-485426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	{"level":"info","ts":"2025-11-15T10:34:48.243324Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-15T10:34:48.259926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:37604: use of closed network connection"}
	2025/11/15 10:34:48 WARNING: [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"error","ts":"2025-11-15T10:34:48.267026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:34:48.268704Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:34:48.270139Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:34:48.270308Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e6c05fccff8d5b5b","current-leader-member-id":"e6c05fccff8d5b5b"}
	{"level":"info","ts":"2025-11-15T10:34:48.270709Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:34:48.270809Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271353Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271415Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:34:48.271431Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271491Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:34:48.271529Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:34:48.271543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.9:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:34:48.275142Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"error","ts":"2025-11-15T10:34:48.275280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.9:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:34:48.275317Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2025-11-15T10:34:48.275352Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-485426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	
	
	==> kernel <==
	 10:35:28 up 1 min,  0 users,  load average: 2.15, 0.74, 0.26
	Linux pause-485426 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [203bc1a24c28d8659d0d4f3d131d905fca51504656d0cb507f8ea02856c53819] <==
	I1115 10:35:07.457695       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:35:07.462855       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:35:07.472775       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:35:07.472861       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:35:07.472880       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:35:07.472896       1 cache.go:39] Caches are synced for autoregister controller
	E1115 10:35:07.487469       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:35:07.489688       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:35:07.499110       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:07.502717       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:35:07.502833       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:35:07.502969       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:35:07.507098       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:35:07.507147       1 policy_source.go:240] refreshing policies
	I1115 10:35:07.536333       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:07.560706       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:08.148040       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:08.294780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:09.022746       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:09.112864       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:35:09.149945       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:09.157092       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:11.004465       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:11.154367       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:35:11.204384       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [28f9ab2457f8c88e7ef3df4d2e6ebe6395e82b4142d3688b9f2aff46aecc4fc2] <==
	I1115 10:34:46.884955       1 options.go:263] external host was not specified, using 192.168.39.9
	I1115 10:34:46.899852       1 server.go:150] Version: v1.34.1
	I1115 10:34:46.901321       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [206a71517720cbf3f74c8a02bf0ca1358007532c62e6d904606290a78ea07e7c] <==
	I1115 10:35:10.825517       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:35:10.829000       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:10.829299       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:10.831481       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:35:10.832336       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:35:10.835445       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:35:10.836791       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:10.837976       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:35:10.839237       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:35:10.840513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:35:10.840570       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:35:10.841748       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:35:10.849505       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:35:10.850693       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:35:10.850750       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:10.850864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:10.850887       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:35:10.850903       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:35:10.850857       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:35:10.851080       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:35:10.851118       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:35:10.857667       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:10.874362       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:35:10.874838       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:35:10.887325       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-controller-manager [a97c92af3b09cbef4782c6bb135ce536647dcb9bf25ce7910f0a132f9c2ad75a] <==
	
	
	==> kube-proxy [e51cba8382c592b4f3d871ac38eb8b58fdd338f4202156ee31de473f1da68586] <==
	
	
	==> kube-proxy [fa458fc484f9b6eaac0fa430788377e60b41f61eaec0cad8e1b535cb398bd34d] <==
	I1115 10:35:08.922668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:09.023673       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:09.023724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.9"]
	E1115 10:35:09.023836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:09.084337       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1115 10:35:09.084491       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1115 10:35:09.084528       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:09.104752       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:09.105426       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:09.105452       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:09.115930       1 config.go:200] "Starting service config controller"
	I1115 10:35:09.118724       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:09.116709       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:09.122334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:09.116659       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:09.122377       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:09.119408       1 config.go:309] "Starting node config controller"
	I1115 10:35:09.122406       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:09.122420       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:09.220857       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:35:09.223193       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:35:09.223463       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ebd7d2679e0bde6320b44d8a66ba26d0838aea8381c6c6742e529909aa8ff9f9] <==
	
	
	==> kube-scheduler [f833090b86dd6b855234bf76cc7a8f9f71a2b76cc403f3018c45cf2625ec24b3] <==
	I1115 10:35:05.357529       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:35:07.583505       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:35:07.583615       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:07.598007       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:35:07.599062       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:35:07.599384       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:35:07.599099       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:07.599630       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:07.599163       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:35:07.599820       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:35:07.599180       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:35:07.700602       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:35:07.700955       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:35:07.701012       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.564852    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.565457    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.587004    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-485426\" already exists" pod="kube-system/kube-apiserver-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.590249    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-485426\" already exists" pod="kube-system/etcd-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.593247    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-485426\" already exists" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.609598    3786 kubelet_node_status.go:124] "Node was previously registered" node="pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.609709    3786 kubelet_node_status.go:78] "Successfully registered node" node="pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.609736    3786 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.610753    3786 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.614475    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-485426\" already exists" pod="kube-system/etcd-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.614501    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.642799    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-485426\" already exists" pod="kube-system/kube-apiserver-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.642843    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.652754    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-485426\" already exists" pod="kube-system/kube-controller-manager-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: I1115 10:35:07.652816    3786 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:07 pause-485426 kubelet[3786]: E1115 10:35:07.667193    3786 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-485426\" already exists" pod="kube-system/kube-scheduler-pause-485426"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.048285    3786 apiserver.go:52] "Watching apiserver"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.071049    3786 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.139778    3786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/580dd749-55c2-4ae3-91db-623ae52c0bb4-xtables-lock\") pod \"kube-proxy-54x7t\" (UID: \"580dd749-55c2-4ae3-91db-623ae52c0bb4\") " pod="kube-system/kube-proxy-54x7t"
	Nov 15 10:35:08 pause-485426 kubelet[3786]: I1115 10:35:08.139819    3786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580dd749-55c2-4ae3-91db-623ae52c0bb4-lib-modules\") pod \"kube-proxy-54x7t\" (UID: \"580dd749-55c2-4ae3-91db-623ae52c0bb4\") " pod="kube-system/kube-proxy-54x7t"
	Nov 15 10:35:10 pause-485426 kubelet[3786]: I1115 10:35:10.975143    3786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:35:13 pause-485426 kubelet[3786]: E1115 10:35:13.221775    3786 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763202913219848077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 15 10:35:13 pause-485426 kubelet[3786]: E1115 10:35:13.221871    3786 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763202913219848077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 15 10:35:23 pause-485426 kubelet[3786]: E1115 10:35:23.224314    3786 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763202923223373240  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 15 10:35:23 pause-485426 kubelet[3786]: E1115 10:35:23.224341    3786 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763202923223373240  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-485426 -n pause-485426
helpers_test.go:269: (dbg) Run:  kubectl --context pause-485426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.00s)
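The kube-proxy log in the post-mortem above warns that nodePortAddresses is unset and suggests `--nodeport-addresses primary`. A minimal sketch of applying that suggestion on a kubeadm-style cluster like this one, assuming the standard kube-proxy ConfigMap layout (the profile name comes from the run above; the "primary" value is the one the log itself proposes):

	# edit the KubeProxyConfiguration embedded in the kube-proxy ConfigMap
	kubectl --context pause-485426 -n kube-system edit configmap kube-proxy
	# under the config.conf key, set:
	#   nodePortAddresses: ["primary"]
	# then restart the kube-proxy pods so the new config is picked up
	kubectl --context pause-485426 -n kube-system rollout restart daemonset kube-proxy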

                                                
                                    

Test pass (308/346)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 4.2
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.57
18 TestDownloadOnly/v1.34.1/DeleteAll 0.17
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.68
22 TestOffline 78.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 127.02
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.53
35 TestAddons/parallel/Registry 20.5
36 TestAddons/parallel/RegistryCreds 0.71
38 TestAddons/parallel/InspektorGadget 10.79
39 TestAddons/parallel/MetricsServer 5.83
41 TestAddons/parallel/CSI 61.52
42 TestAddons/parallel/Headlamp 22.3
43 TestAddons/parallel/CloudSpanner 5.58
44 TestAddons/parallel/LocalPath 56.95
45 TestAddons/parallel/NvidiaDevicePlugin 6.71
46 TestAddons/parallel/Yakd 11.76
48 TestAddons/StoppedEnableDisable 87.38
49 TestCertOptions 57.78
50 TestCertExpiration 319.27
52 TestForceSystemdFlag 77.63
53 TestForceSystemdEnv 41.65
58 TestErrorSpam/setup 35.2
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.67
61 TestErrorSpam/pause 1.56
62 TestErrorSpam/unpause 1.79
63 TestErrorSpam/stop 4.92
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 56.41
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 34.68
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
75 TestFunctional/serial/CacheCmd/cache/add_local 2.09
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 36.19
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.43
86 TestFunctional/serial/LogsFileCmd 1.44
87 TestFunctional/serial/InvalidService 4.3
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 18.17
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 17.64
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 41.62
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.27
103 TestFunctional/parallel/MySQL 26.87
104 TestFunctional/parallel/FileSync 0.17
105 TestFunctional/parallel/CertSync 1.1
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
113 TestFunctional/parallel/License 0.45
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
116 TestFunctional/parallel/ProfileCmd/profile_list 0.46
117 TestFunctional/parallel/MountCmd/any-port 9.15
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
119 TestFunctional/parallel/ServiceCmd/List 0.46
120 TestFunctional/parallel/MountCmd/specific-port 1.4
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
123 TestFunctional/parallel/ServiceCmd/Format 0.32
124 TestFunctional/parallel/ServiceCmd/URL 0.33
125 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.55
131 TestFunctional/parallel/Version/short 0.07
132 TestFunctional/parallel/Version/components 0.56
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
146 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
147 TestFunctional/parallel/ImageCommands/Setup 1.56
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.44
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.11
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 203.14
163 TestMultiControlPlane/serial/DeployApp 7.61
164 TestMultiControlPlane/serial/PingHostFromPods 1.34
165 TestMultiControlPlane/serial/AddWorkerNode 46.31
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
168 TestMultiControlPlane/serial/CopyFile 11.03
169 TestMultiControlPlane/serial/StopSecondaryNode 85.51
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
171 TestMultiControlPlane/serial/RestartSecondaryNode 36.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 353.56
174 TestMultiControlPlane/serial/DeleteSecondaryNode 18.56
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
176 TestMultiControlPlane/serial/StopCluster 248.41
177 TestMultiControlPlane/serial/RestartCluster 113.7
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
179 TestMultiControlPlane/serial/AddSecondaryNode 71.94
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
185 TestJSONOutput/start/Command 54.46
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.72
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.22
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 76.78
217 TestMountStart/serial/StartWithMountFirst 23.11
218 TestMountStart/serial/VerifyMountFirst 0.3
219 TestMountStart/serial/StartWithMountSecond 19.83
220 TestMountStart/serial/VerifyMountSecond 0.32
221 TestMountStart/serial/DeleteFirst 0.72
222 TestMountStart/serial/VerifyMountPostDelete 0.31
223 TestMountStart/serial/Stop 1.26
224 TestMountStart/serial/RestartStopped 18.12
225 TestMountStart/serial/VerifyMountPostStop 0.3
228 TestMultiNode/serial/FreshStart2Nodes 100.87
229 TestMultiNode/serial/DeployApp2Nodes 6.25
230 TestMultiNode/serial/PingHostFrom2Pods 0.89
231 TestMultiNode/serial/AddNode 41.65
232 TestMultiNode/serial/MultiNodeLabels 0.07
233 TestMultiNode/serial/ProfileList 0.46
234 TestMultiNode/serial/CopyFile 6.25
235 TestMultiNode/serial/StopNode 2.2
236 TestMultiNode/serial/StartAfterStop 40.56
237 TestMultiNode/serial/RestartKeepsNodes 293.59
238 TestMultiNode/serial/DeleteNode 2.66
239 TestMultiNode/serial/StopMultiNode 172.6
240 TestMultiNode/serial/RestartMultiNode 83.85
241 TestMultiNode/serial/ValidateNameConflict 40.21
248 TestScheduledStopUnix 108.88
252 TestRunningBinaryUpgrade 150.6
254 TestKubernetesUpgrade 140.24
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 78.45
259 TestNoKubernetes/serial/StartWithStopK8s 51.27
260 TestNoKubernetes/serial/Start 46.81
268 TestNetworkPlugins/group/false 4.47
272 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
274 TestNoKubernetes/serial/ProfileList 0.86
275 TestStoppedBinaryUpgrade/Setup 0.46
276 TestStoppedBinaryUpgrade/Upgrade 129.66
277 TestNoKubernetes/serial/Stop 1.29
278 TestNoKubernetes/serial/StartNoArgs 54.2
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
281 TestPause/serial/Start 64.14
290 TestISOImage/Setup 20.02
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
292 TestNetworkPlugins/group/auto/Start 72.97
294 TestISOImage/Binaries/crictl 0.21
295 TestISOImage/Binaries/curl 0.2
296 TestISOImage/Binaries/docker 0.21
297 TestISOImage/Binaries/git 0.18
298 TestISOImage/Binaries/iptables 0.21
299 TestISOImage/Binaries/podman 0.2
300 TestISOImage/Binaries/rsync 0.21
301 TestISOImage/Binaries/socat 0.2
302 TestISOImage/Binaries/wget 0.2
303 TestISOImage/Binaries/VBoxControl 0.19
304 TestISOImage/Binaries/VBoxService 0.2
305 TestNetworkPlugins/group/kindnet/Start 86.27
306 TestNetworkPlugins/group/calico/Start 100.87
307 TestNetworkPlugins/group/auto/KubeletFlags 0.19
308 TestNetworkPlugins/group/auto/NetCatPod 12.27
309 TestNetworkPlugins/group/auto/DNS 0.18
310 TestNetworkPlugins/group/auto/Localhost 0.15
311 TestNetworkPlugins/group/auto/HairPin 0.16
312 TestNetworkPlugins/group/custom-flannel/Start 80.75
313 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
314 TestNetworkPlugins/group/enable-default-cni/Start 72.41
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
316 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
317 TestNetworkPlugins/group/kindnet/DNS 0.2
318 TestNetworkPlugins/group/kindnet/Localhost 0.44
319 TestNetworkPlugins/group/kindnet/HairPin 0.28
320 TestNetworkPlugins/group/flannel/Start 84.73
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/calico/KubeletFlags 0.21
323 TestNetworkPlugins/group/calico/NetCatPod 12.29
324 TestNetworkPlugins/group/calico/DNS 0.21
325 TestNetworkPlugins/group/calico/Localhost 0.18
326 TestNetworkPlugins/group/calico/HairPin 0.18
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.33
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
331 TestNetworkPlugins/group/bridge/Start 57.72
332 TestNetworkPlugins/group/custom-flannel/DNS 0.16
333 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
334 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
339 TestStartStop/group/old-k8s-version/serial/FirstStart 61.99
341 TestStartStop/group/no-preload/serial/FirstStart 96.52
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
344 TestNetworkPlugins/group/flannel/NetCatPod 13.34
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
346 TestNetworkPlugins/group/bridge/NetCatPod 10.24
347 TestNetworkPlugins/group/flannel/DNS 0.19
348 TestNetworkPlugins/group/flannel/Localhost 0.15
349 TestNetworkPlugins/group/flannel/HairPin 0.13
350 TestNetworkPlugins/group/bridge/DNS 0.18
351 TestNetworkPlugins/group/bridge/Localhost 0.15
352 TestNetworkPlugins/group/bridge/HairPin 0.16
354 TestStartStop/group/embed-certs/serial/FirstStart 54.83
355 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
357 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.68
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
359 TestStartStop/group/old-k8s-version/serial/Stop 78.94
360 TestStartStop/group/no-preload/serial/DeployApp 11.34
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
362 TestStartStop/group/no-preload/serial/Stop 89.55
363 TestStartStop/group/embed-certs/serial/DeployApp 11.29
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
365 TestStartStop/group/embed-certs/serial/Stop 78.43
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 78.59
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
370 TestStartStop/group/old-k8s-version/serial/SecondStart 40.42
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
372 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
373 TestStartStop/group/no-preload/serial/SecondStart 58.88
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
375 TestStartStop/group/embed-certs/serial/SecondStart 58.38
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/old-k8s-version/serial/Pause 2.72
380 TestStartStop/group/newest-cni/serial/FirstStart 66.13
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 75.65
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
387 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/no-preload/serial/Pause 3.16
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/embed-certs/serial/Pause 3.35
392 TestISOImage/PersistentMounts//data 0.2
393 TestISOImage/PersistentMounts//var/lib/docker 0.2
394 TestISOImage/PersistentMounts//var/lib/cni 0.21
395 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
396 TestISOImage/PersistentMounts//var/lib/minikube 0.21
397 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
398 TestISOImage/PersistentMounts//var/lib/boot2docker 0.21
399 TestISOImage/VersionJSON 0.17
400 TestISOImage/eBPFSupport 0.17
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
403 TestStartStop/group/newest-cni/serial/Stop 10.76
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
405 TestStartStop/group/newest-cni/serial/SecondStart 31.95
406 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
408 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
409 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.83
410 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
413 TestStartStop/group/newest-cni/serial/Pause 2.5
x
+
TestDownloadOnly/v1.28.0/json-events (7.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-530902 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-530902 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.573322876s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 09:37:57.060327  416801 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1115 09:37:57.060434  416801 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-530902
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-530902: exit status 85 (87.76107ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-530902 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-530902 │ jenkins │ v1.37.0 │ 15 Nov 25 09:37 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:37:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:37:49.542582  416813 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:37:49.542950  416813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:37:49.542960  416813 out.go:374] Setting ErrFile to fd 2...
	I1115 09:37:49.542964  416813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:37:49.543148  416813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	W1115 09:37:49.543283  416813 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21894-412813/.minikube/config/config.json: open /home/jenkins/minikube-integration/21894-412813/.minikube/config/config.json: no such file or directory
	I1115 09:37:49.543766  416813 out.go:368] Setting JSON to true
	I1115 09:37:49.544733  416813 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4817,"bootTime":1763194653,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:37:49.544830  416813 start.go:143] virtualization: kvm guest
	I1115 09:37:49.547076  416813 out.go:99] [download-only-530902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1115 09:37:49.547229  416813 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 09:37:49.547275  416813 notify.go:221] Checking for updates...
	I1115 09:37:49.548498  416813 out.go:171] MINIKUBE_LOCATION=21894
	I1115 09:37:49.550286  416813 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:37:49.551987  416813 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 09:37:49.553460  416813 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:37:49.554906  416813 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:37:49.557674  416813 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:37:49.557983  416813 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:37:50.090925  416813 out.go:99] Using the kvm2 driver based on user configuration
	I1115 09:37:50.090979  416813 start.go:309] selected driver: kvm2
	I1115 09:37:50.090989  416813 start.go:930] validating driver "kvm2" against <nil>
	I1115 09:37:50.091319  416813 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:37:50.091806  416813 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1115 09:37:50.092455  416813 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:37:50.092493  416813 cni.go:84] Creating CNI manager for ""
	I1115 09:37:50.092531  416813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:37:50.092541  416813 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1115 09:37:50.092604  416813 start.go:353] cluster config:
	{Name:download-only-530902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-530902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:37:50.092802  416813 iso.go:125] acquiring lock: {Name:mke3d0b50f750b07aabde39a6bc9fa707eafad32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:37:50.094636  416813 out.go:99] Downloading VM boot image ...
	I1115 09:37:50.094710  416813 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21894-412813/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1115 09:37:53.024722  416813 out.go:99] Starting "download-only-530902" primary control-plane node in "download-only-530902" cluster
	I1115 09:37:53.024784  416813 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:37:53.044266  416813 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:37:53.044337  416813 cache.go:65] Caching tarball of preloaded images
	I1115 09:37:53.044531  416813 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:37:53.046379  416813 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1115 09:37:53.046405  416813 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:37:53.076262  416813 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1115 09:37:53.076412  416813 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-530902 host does not exist
	  To start a cluster, run: "minikube start -p download-only-530902"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-530902
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-186898 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-186898 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.20358729s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 09:38:01.671220  416801 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:38:01.671269  416801 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-186898
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-186898: exit status 85 (571.116327ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-530902 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-530902 │ jenkins │ v1.37.0 │ 15 Nov 25 09:37 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 09:37 UTC │ 15 Nov 25 09:37 UTC │
	│ delete  │ -p download-only-530902                                                                                                                                                 │ download-only-530902 │ jenkins │ v1.37.0 │ 15 Nov 25 09:37 UTC │ 15 Nov 25 09:37 UTC │
	│ start   │ -o=json --download-only -p download-only-186898 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-186898 │ jenkins │ v1.37.0 │ 15 Nov 25 09:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:37:57
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:37:57.527243  417031 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:37:57.527560  417031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:37:57.527572  417031 out.go:374] Setting ErrFile to fd 2...
	I1115 09:37:57.527576  417031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:37:57.527817  417031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 09:37:57.528343  417031 out.go:368] Setting JSON to true
	I1115 09:37:57.529291  417031 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4825,"bootTime":1763194653,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:37:57.529413  417031 start.go:143] virtualization: kvm guest
	I1115 09:37:57.531545  417031 out.go:99] [download-only-186898] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:37:57.531739  417031 notify.go:221] Checking for updates...
	I1115 09:37:57.533213  417031 out.go:171] MINIKUBE_LOCATION=21894
	I1115 09:37:57.534835  417031 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:37:57.536243  417031 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 09:37:57.537447  417031 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:37:57.538844  417031 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-186898 host does not exist
	  To start a cluster, run: "minikube start -p download-only-186898"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-186898
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I1115 09:38:02.872288  416801 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-456316 --alsologtostderr --binary-mirror http://127.0.0.1:36499 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-456316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-456316
--- PASS: TestBinaryMirror (0.68s)
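TestBinaryMirror above exercises the --binary-mirror flag against a throwaway local HTTP endpoint. A minimal sketch of the same idea outside the test harness, assuming a mirror that serves the Kubernetes binaries at the usual dl.k8s.io paths (the port and profile name here are placeholders):

	minikube start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:36499 \
	  --driver=kvm2 --container-runtime=crio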

                                                
                                    
x
+
TestOffline (78.53s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-142785 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-142785 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.239006077s)
helpers_test.go:175: Cleaning up "offline-crio-142785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-142785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-142785: (2.292766682s)
--- PASS: TestOffline (78.53s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-965866
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-965866: exit status 85 (67.267208ms)

                                                
                                                
-- stdout --
	* Profile "addons-965866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-965866"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-965866
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-965866: exit status 85 (69.321119ms)

                                                
                                                
-- stdout --
	* Profile "addons-965866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-965866"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (127.02s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-965866 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-965866 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.016562424s)
--- PASS: TestAddons/Setup (127.02s)
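The Setup run above enables every addon under test in a single start invocation. Individual addons can also be toggled on an existing profile, as the PreSetup checks above attempt against a missing one; a sketch against the profile created here (the addon name is just an example):

	out/minikube-linux-amd64 addons enable ingress -p addons-965866
	out/minikube-linux-amd64 addons disable ingress -p addons-965866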

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-965866 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-965866 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-965866 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-965866 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [70a0ad0e-2065-49de-b086-eb86bff49a67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [70a0ad0e-2065-49de-b086-eb86bff49a67] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004234429s
addons_test.go:694: (dbg) Run:  kubectl --context addons-965866 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-965866 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-965866 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)
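The exec probes above read the injected credential variables one at a time. A compact spot-check of both from the same busybox pod, using the same context and pod as in the run above:

	kubectl --context addons-965866 exec busybox -- \
	  sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"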

                                                
                                    
x
+
TestAddons/parallel/Registry (20.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.776325ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-gvxk5" [750caabe-24b3-415a-988c-05ee8b751f39] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003554677s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-85vq7" [2ce95f92-5041-42b4-94d4-70973bc1dea8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004834383s
addons_test.go:392: (dbg) Run:  kubectl --context addons-965866 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-965866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-965866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.718567666s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 ip
2025/11/15 09:40:49 [DEBUG] GET http://192.168.39.252:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.50s)
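The registry check above probes the service DNS name from inside the cluster and then fetches the node IP. A sketch of an equivalent probe from the host, assuming the registry proxy keeps listening on port 5000 of that IP as the DEBUG GET above indicates (the /v2/ path is the standard registry API ping endpoint, not something the test itself calls):

	curl -i http://$(out/minikube-linux-amd64 -p addons-965866 ip):5000/v2/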

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.276301ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-965866
addons_test.go:332: (dbg) Run:  kubectl --context addons-965866 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rj6dr" [733951a7-ce3b-45dc-b6f0-eb2465fb2f28] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006383469s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable inspektor-gadget --alsologtostderr -v=1: (5.780835859s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.558318ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-g7cwg" [8a8c0a7b-40cd-4f4f-9186-829a7d7c3c14] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00476582s
addons_test.go:463: (dbg) Run:  kubectl --context addons-965866 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

                                                
                                    
x
+
TestAddons/parallel/CSI (61.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1115 09:40:36.796686  416801 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 09:40:36.826741  416801 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:40:36.826778  416801 kapi.go:107] duration metric: took 30.132557ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 30.146453ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-965866 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-965866 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [42117f19-379d-4177-9717-c1ca4714c769] Pending
helpers_test.go:352: "task-pv-pod" [42117f19-379d-4177-9717-c1ca4714c769] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [42117f19-379d-4177-9717-c1ca4714c769] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.006656389s
addons_test.go:572: (dbg) Run:  kubectl --context addons-965866 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-965866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-965866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-965866 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-965866 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-965866 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-965866 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [249313df-373d-4f47-8ff0-f832a35fcc80] Pending
helpers_test.go:352: "task-pv-pod-restore" [249313df-373d-4f47-8ff0-f832a35fcc80] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [249313df-373d-4f47-8ff0-f832a35fcc80] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005981371s
addons_test.go:614: (dbg) Run:  kubectl --context addons-965866 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-965866 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-965866 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.811664205s)
--- PASS: TestAddons/parallel/CSI (61.52s)
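
For anyone replaying this flow by hand, the sequence the test drives condenses to the commands below (same testdata manifests and resource names that appear in the log; point kubectl at a cluster with the csi-hostpath-driver and volumesnapshots addons enabled):

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # PVC "hpvc", provisioned by the hostpath CSI driver
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod "task-pv-pod" mounts the claim
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo" of the claim
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc           # drop the original consumers
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # PVC "hpvc-restore" sourced from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod "task-pv-pod-restore" mounts the restored claim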

                                                
                                    
TestAddons/parallel/Headlamp (22.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-965866 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-965866 --alsologtostderr -v=1: (1.185634189s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-rg86j" [609ec5d0-3fdc-4b89-bdfc-79394e2fe13f] Pending
helpers_test.go:352: "headlamp-6945c6f4d-rg86j" [609ec5d0-3fdc-4b89-bdfc-79394e2fe13f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-rg86j" [609ec5d0-3fdc-4b89-bdfc-79394e2fe13f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.009811099s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable headlamp --alsologtostderr -v=1: (6.10558565s)
--- PASS: TestAddons/parallel/Headlamp (22.30s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-qbl4d" [883d3e62-29e6-4f1c-824c-af7462ee148c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005222952s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (56.95s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-965866 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-965866 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2aec9da1-8974-4942-acac-534e1119e32a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2aec9da1-8974-4942-acac-534e1119e32a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2aec9da1-8974-4942-acac-534e1119e32a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004806113s
addons_test.go:967: (dbg) Run:  kubectl --context addons-965866 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 ssh "cat /opt/local-path-provisioner/pvc-453b0945-0433-401e-a86e-37483ec44b20_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-965866 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-965866 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.092620536s)
--- PASS: TestAddons/parallel/LocalPath (56.95s)
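
A by-hand version of the same local-path flow, using the testdata manifests named in the log (the directory under /opt/local-path-provisioner is named after the dynamically provisioned PV, so <pv-name> and <profile> below are placeholders):

	kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml   # PVC "test-pvc" against the local-path storage class
	kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml   # busybox pod "test-local-path" mounts the claim
	minikube -p <profile> ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"   # the data lives on the node's disk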

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-xk524" [4e750b0a-f108-442f-b1dc-a91663709ffd] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004360411s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                    
TestAddons/parallel/Yakd (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zhdn5" [29d64ce9-b4b7-46c0-8edb-0d2288857589] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004076254s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965866 addons disable yakd --alsologtostderr -v=1: (5.75183597s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (87.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-965866
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-965866: (1m27.15996545s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-965866
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-965866
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-965866
--- PASS: TestAddons/StoppedEnableDisable (87.38s)

                                                
                                    
TestCertOptions (57.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-636664 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-636664 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (55.525123799s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-636664 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-636664 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-636664 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-636664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-636664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-636664: (1.826076s)
--- PASS: TestCertOptions (57.78s)
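
In outline, the test bakes extra SANs and a non-default secure port into the apiserver certificate at start time and then checks them from the host; a rough by-hand sketch (profile name illustrative, and the grep filter is just a convenient way to surface the SAN block):

	minikube start -p cert-options --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	minikube -p cert-options ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	kubectl --context cert-options config view   # the server URL should carry port 8555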

                                                
                                    
TestCertExpiration (319.27s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-506364 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-506364 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.512253594s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-506364 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-506364 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m17.840061094s)
helpers_test.go:175: Cleaning up "cert-expiration-506364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-506364
--- PASS: TestCertExpiration (319.27s)
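
The shape of this test, sketched as commands (profile name illustrative): start with deliberately short-lived certificates, wait past their expiry, then start again with a long validity, which is expected to regenerate the certificates instead of failing:

	minikube start -p cert-expiration --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180   # let the 3-minute certificates lapse
	minikube start -p cert-expiration --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio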

                                                
                                    
TestForceSystemdFlag (77.63s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-348004 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-348004 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.51159358s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-348004 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-348004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-348004
--- PASS: TestForceSystemdFlag (77.63s)
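
Roughly what the flag test exercises, by hand (profile name illustrative; the check on the drop-in config presumably looks for the systemd cgroup manager):

	minikube start -p force-systemd --force-systemd --driver=kvm2 --container-runtime=crio
	minikube -p force-systemd ssh "cat /etc/crio/crio.conf.d/02-crio.conf"   # expect a cgroup_manager = "systemd" line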

                                                
                                    
TestForceSystemdEnv (41.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-177950 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-177950 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.66371852s)
helpers_test.go:175: Cleaning up "force-systemd-env-177950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-177950
--- PASS: TestForceSystemdEnv (41.65s)

                                                
                                    
TestErrorSpam/setup (35.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-814775 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-814775 --driver=kvm2  --container-runtime=crio
E1115 09:45:11.290915  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.297415  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.308851  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.330285  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.371827  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.453372  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.615015  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:11.936814  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:12.578886  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:13.860559  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:16.422854  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:21.544458  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:31.786424  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-814775 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-814775 --driver=kvm2  --container-runtime=crio: (35.203284891s)
--- PASS: TestErrorSpam/setup (35.20s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 status
--- PASS: TestErrorSpam/status (0.67s)

                                                
                                    
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (4.92s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 stop: (2.095258829s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 stop: (1.282170876s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-814775 --log_dir /tmp/nospam-814775 stop: (1.540111071s)
--- PASS: TestErrorSpam/stop (4.92s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21894-412813/.minikube/files/etc/test/nested/copy/416801/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (56.41s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430000 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1115 09:45:52.267850  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:46:33.229499  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-430000 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.41205129s)
--- PASS: TestFunctional/serial/StartWithProxy (56.41s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.68s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1115 09:46:45.624798  416801 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430000 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-430000 --alsologtostderr -v=8: (34.681489348s)
functional_test.go:678: soft start took 34.682342486s for "functional-430000" cluster.
I1115 09:47:20.306617  416801 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (34.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-430000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 cache add registry.k8s.io/pause:3.1: (1.124495397s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 cache add registry.k8s.io/pause:3.3: (1.159588785s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 cache add registry.k8s.io/pause:latest: (1.13578643s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-430000 /tmp/TestFunctionalserialCacheCmdcacheadd_local1735846073/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cache add minikube-local-cache-test:functional-430000
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 cache add minikube-local-cache-test:functional-430000: (1.706670517s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cache delete minikube-local-cache-test:functional-430000
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-430000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.09s)
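
The local-image variant of the cache commands reduces to: build a tag into the host's Docker daemon, then load it into the cluster with the same cache subcommand used for registry images (image name below is hypothetical):

	docker build -t my-local-test:dev .                   # any locally built tag
	minikube -p <profile> cache add my-local-test:dev     # copies it into the node's container runtime
	minikube -p <profile> cache delete my-local-test:dev  # removes it from minikube's cache again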

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (189.665574ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 cache reload: (1.059982836s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
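
The reload round-trip above, condensed (same commands the test runs, with the profile name generalized):

	minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove the image from the node
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now fails, the image is gone
	minikube -p <profile> cache reload                                            # re-pushes everything in the local cache
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again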

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 kubectl -- --context functional-430000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-430000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.19s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1115 09:47:55.151100  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-430000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.19070117s)
functional_test.go:776: restart took 36.190873881s for "functional-430000" cluster.
I1115 09:48:04.563001  416801 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.19s)
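
The flag used here follows minikube's general --extra-config=<component>.<key>=<value> pattern, so the same mechanism can feed flags to other control-plane components as well; for example (profile name and second value illustrative):

	minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	minikube start -p <profile> --extra-config=kubelet.max-pods=150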

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-430000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 logs: (1.42785111s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 logs --file /tmp/TestFunctionalserialLogsFileCmd1315153405/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 logs --file /tmp/TestFunctionalserialLogsFileCmd1315153405/001/logs.txt: (1.435764533s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                    
TestFunctional/serial/InvalidService (4.3s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-430000
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-430000: exit status 115 (247.068887ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.154:30733 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-430000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 config get cpus: exit status 14 (75.747887ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 config get cpus: exit status 14 (74.316257ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-430000 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-430000 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 422204: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.17s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.551432ms)

                                                
                                                
-- stdout --
	* [functional-430000] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:48:13.280716  422120 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:48:13.280820  422120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:48:13.280828  422120 out.go:374] Setting ErrFile to fd 2...
	I1115 09:48:13.280833  422120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:48:13.281065  422120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 09:48:13.281482  422120 out.go:368] Setting JSON to false
	I1115 09:48:13.282414  422120 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5440,"bootTime":1763194653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:48:13.282514  422120 start.go:143] virtualization: kvm guest
	I1115 09:48:13.284421  422120 out.go:179] * [functional-430000] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:48:13.286285  422120 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:48:13.286307  422120 notify.go:221] Checking for updates...
	I1115 09:48:13.288585  422120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:48:13.289959  422120 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 09:48:13.291161  422120 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:48:13.292480  422120 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:48:13.295608  422120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:48:13.297706  422120 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:48:13.298355  422120 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:48:13.339197  422120 out.go:179] * Using the kvm2 driver based on existing profile
	I1115 09:48:13.341092  422120 start.go:309] selected driver: kvm2
	I1115 09:48:13.341114  422120 start.go:930] validating driver "kvm2" against &{Name:functional-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:48:13.341246  422120 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:48:13.343405  422120 out.go:203] 
	W1115 09:48:13.344569  422120 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:48:13.345687  422120 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430000 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.099681ms)

                                                
                                                
-- stdout --
	* [functional-430000] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:48:13.142585  422104 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:48:13.142794  422104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:48:13.142810  422104 out.go:374] Setting ErrFile to fd 2...
	I1115 09:48:13.142815  422104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:48:13.143193  422104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 09:48:13.143698  422104 out.go:368] Setting JSON to false
	I1115 09:48:13.144718  422104 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5440,"bootTime":1763194653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:48:13.144860  422104 start.go:143] virtualization: kvm guest
	I1115 09:48:13.148501  422104 out.go:179] * [functional-430000] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1115 09:48:13.149969  422104 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:48:13.149983  422104 notify.go:221] Checking for updates...
	I1115 09:48:13.152286  422104 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:48:13.153843  422104 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 09:48:13.154958  422104 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 09:48:13.156123  422104 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:48:13.157380  422104 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:48:13.159231  422104 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:48:13.159873  422104 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:48:13.196353  422104 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1115 09:48:13.197750  422104 start.go:309] selected driver: kvm2
	I1115 09:48:13.197784  422104 start.go:930] validating driver "kvm2" against &{Name:functional-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:48:13.197967  422104 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:48:13.200212  422104 out.go:203] 
	W1115 09:48:13.201633  422104 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 09:48:13.203089  422104 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (17.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-430000 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-430000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zr4lj" [10bbcb54-aa00-4faa-ba8a-d2778ea667fa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/11/15 09:48:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "hello-node-connect-7d85dfc575-zr4lj" [10bbcb54-aa00-4faa-ba8a-d2778ea667fa] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.018778613s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.154:32765
functional_test.go:1680: http://192.168.39.154:32765: success! body:
Request served by hello-node-connect-7d85dfc575-zr4lj

HTTP/1.1 GET /

Host: 192.168.39.154:32765
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.64s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e65174ab-a31c-485b-8eaa-92b032135f06] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007483643s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-430000 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-430000 get pvc myclaim -o=json
I1115 09:48:20.162571  416801 retry.go:31] will retry after 1.756263523s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1ca599b4-18fa-42b3-bc7b-da918e18754e ResourceVersion:785 Generation:0 CreationTimestamp:2025-11-15 09:48:20 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00165ccc0 VolumeMode:0xc00165ccd0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-430000 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6ac983d1-3345-42fa-8bda-ab33a44232e6] Pending
helpers_test.go:352: "sp-pod" [6ac983d1-3345-42fa-8bda-ab33a44232e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6ac983d1-3345-42fa-8bda-ab33a44232e6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.005909257s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-430000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-430000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-430000 delete -f testdata/storage-provisioner/pod.yaml: (4.119174497s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:48:44.279789  416801 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [93ba22cc-b41c-4529-8cd2-09a89037d5c5] Pending
helpers_test.go:352: "sp-pod" [93ba22cc-b41c-4529-8cd2-09a89037d5c5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [93ba22cc-b41c-4529-8cd2-09a89037d5c5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004705577s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-430000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.62s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh -n functional-430000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cp functional-430000:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4038788485/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh -n functional-430000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh -n functional-430000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)

                                                
                                    
TestFunctional/parallel/MySQL (26.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-430000 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-48rps" [34c9f322-486b-48f5-9ada-c0f4d8132206] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-48rps" [34c9f322-486b-48f5-9ada-c0f4d8132206] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003735853s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-430000 exec mysql-5bb876957f-48rps -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-430000 exec mysql-5bb876957f-48rps -- mysql -ppassword -e "show databases;": exit status 1 (121.229952ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:48:56.225945  416801 retry.go:31] will retry after 1.325460771s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-430000 exec mysql-5bb876957f-48rps -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-430000 exec mysql-5bb876957f-48rps -- mysql -ppassword -e "show databases;": exit status 1 (173.157547ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:48:57.725371  416801 retry.go:31] will retry after 1.958997177s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-430000 exec mysql-5bb876957f-48rps -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.87s)

                                                
                                    
TestFunctional/parallel/FileSync (0.17s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/416801/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /etc/test/nested/copy/416801/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

                                                
                                    
TestFunctional/parallel/CertSync (1.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/416801.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/416801.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/416801.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /usr/share/ca-certificates/416801.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4168012.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/4168012.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4168012.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /usr/share/ca-certificates/4168012.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.10s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-430000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh "sudo systemctl is-active docker": exit status 1 (193.572184ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh "sudo systemctl is-active containerd": exit status 1 (227.907181ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                    
TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-430000 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-430000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zlcj4" [e6e20e2b-8e44-414a-915b-80e3a10068e1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-zlcj4" [e6e20e2b-8e44-414a-915b-80e3a10068e1] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005308485s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "388.682654ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.059857ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdany-port2707316477/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763200092281923972" to /tmp/TestFunctionalparallelMountCmdany-port2707316477/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763200092281923972" to /tmp/TestFunctionalparallelMountCmdany-port2707316477/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763200092281923972" to /tmp/TestFunctionalparallelMountCmdany-port2707316477/001/test-1763200092281923972
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.240949ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:48:12.513514  416801 retry.go:31] will retry after 350.428948ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 09:48 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 09:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 09:48 test-1763200092281923972
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh cat /mount-9p/test-1763200092281923972
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-430000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [25147b36-1e4e-4dd7-a6c9-682ac3a2f019] Pending
helpers_test.go:352: "busybox-mount" [25147b36-1e4e-4dd7-a6c9-682ac3a2f019] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [25147b36-1e4e-4dd7-a6c9-682ac3a2f019] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [25147b36-1e4e-4dd7-a6c9-682ac3a2f019] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004966894s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-430000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdany-port2707316477/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "277.555072ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.316012ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdspecific-port3783757700/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.227316ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:48:21.631716  416801 retry.go:31] will retry after 359.37464ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
I1115 09:48:22.147162  416801 detect.go:223] nested VM detected
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdspecific-port3783757700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh "sudo umount -f /mount-9p": exit status 1 (204.896166ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-430000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdspecific-port3783757700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 service list -o json
functional_test.go:1504: Took "485.968314ms" to run "out/minikube-linux-amd64 -p functional-430000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.154:31043
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.154:31043
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4023100985/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4023100985/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4023100985/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T" /mount1: exit status 1 (262.340709ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:48:23.097349  416801 retry.go:31] will retry after 652.950456ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-430000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4023100985/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4023100985/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-430000 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4023100985/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-430000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-430000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-430000 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-430000 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 422507: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-430000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [55de873f-8baa-496d-aaf1-e77788518da7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [55de873f-8baa-496d-aaf1-e77788518da7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.005209493s
I1115 09:48:41.399120  416801 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.55s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-430000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.212.73 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-430000 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-430000
localhost/kicbase/echo-server:functional-430000
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430000 image ls --format short --alsologtostderr:
I1115 09:48:53.959332  423216 out.go:360] Setting OutFile to fd 1 ...
I1115 09:48:53.959445  423216 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:53.959456  423216 out.go:374] Setting ErrFile to fd 2...
I1115 09:48:53.959462  423216 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:53.959788  423216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
I1115 09:48:53.960576  423216 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:53.960759  423216 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:53.963374  423216 ssh_runner.go:195] Run: systemctl --version
I1115 09:48:53.966243  423216 main.go:143] libmachine: domain functional-430000 has defined MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:53.966752  423216 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:90:ea", ip: ""} in network mk-functional-430000: {Iface:virbr1 ExpiryTime:2025-11-15 10:46:04 +0000 UTC Type:0 Mac:52:54:00:fc:90:ea Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-430000 Clientid:01:52:54:00:fc:90:ea}
I1115 09:48:53.966796  423216 main.go:143] libmachine: domain functional-430000 has defined IP address 192.168.39.154 and MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:53.967000  423216 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/functional-430000/id_rsa Username:docker}
I1115 09:48:54.082031  423216 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430000 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ localhost/minikube-local-cache-test     │ functional-430000  │ 7a41e7c33675f │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-430000  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430000 image ls --format table --alsologtostderr:
I1115 09:48:54.468474  423268 out.go:360] Setting OutFile to fd 1 ...
I1115 09:48:54.468617  423268 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:54.468630  423268 out.go:374] Setting ErrFile to fd 2...
I1115 09:48:54.468636  423268 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:54.468895  423268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
I1115 09:48:54.469479  423268 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:54.469576  423268 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:54.471595  423268 ssh_runner.go:195] Run: systemctl --version
I1115 09:48:54.473775  423268 main.go:143] libmachine: domain functional-430000 has defined MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:54.474205  423268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:90:ea", ip: ""} in network mk-functional-430000: {Iface:virbr1 ExpiryTime:2025-11-15 10:46:04 +0000 UTC Type:0 Mac:52:54:00:fc:90:ea Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-430000 Clientid:01:52:54:00:fc:90:ea}
I1115 09:48:54.474233  423268 main.go:143] libmachine: domain functional-430000 has defined IP address 192.168.39.154 and MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:54.474352  423268 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/functional-430000/id_rsa Username:docker}
I1115 09:48:54.572569  423268 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430000 image ls --format json --alsologtostderr:
[{"id":"7a41e7c33675fa1e9c1f1185294333b819db65a165faa7969dbba9f542d9b744","repoDigests":["localhost/minikube-local-cache-test@sha256:de02367ff51efa7faf01fb38e100a57d7a61cd668bfac9cba8f2cd8cc4164bd2"],"repoTags":["localhost/minikube-local-cache-test:functional-430000"],"size":"3330"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size
":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77e
b5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-430000"],"size":"4944818"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["
docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","rep
oDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["
registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b0
4c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad
751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430000 image ls --format json --alsologtostderr:
I1115 09:48:54.234509  423238 out.go:360] Setting OutFile to fd 1 ...
I1115 09:48:54.234820  423238 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:54.234831  423238 out.go:374] Setting ErrFile to fd 2...
I1115 09:48:54.234836  423238 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:54.235026  423238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
I1115 09:48:54.235601  423238 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:54.235729  423238 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:54.237625  423238 ssh_runner.go:195] Run: systemctl --version
I1115 09:48:54.240034  423238 main.go:143] libmachine: domain functional-430000 has defined MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:54.240456  423238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:90:ea", ip: ""} in network mk-functional-430000: {Iface:virbr1 ExpiryTime:2025-11-15 10:46:04 +0000 UTC Type:0 Mac:52:54:00:fc:90:ea Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-430000 Clientid:01:52:54:00:fc:90:ea}
I1115 09:48:54.240484  423238 main.go:143] libmachine: domain functional-430000 has defined IP address 192.168.39.154 and MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:54.240629  423238 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/functional-430000/id_rsa Username:docker}
I1115 09:48:54.331348  423238 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430000 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-430000
size: "4944818"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7a41e7c33675fa1e9c1f1185294333b819db65a165faa7969dbba9f542d9b744
repoDigests:
- localhost/minikube-local-cache-test@sha256:de02367ff51efa7faf01fb38e100a57d7a61cd668bfac9cba8f2cd8cc4164bd2
repoTags:
- localhost/minikube-local-cache-test:functional-430000
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430000 image ls --format yaml --alsologtostderr:
I1115 09:48:53.957084  423217 out.go:360] Setting OutFile to fd 1 ...
I1115 09:48:53.957460  423217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:53.957526  423217 out.go:374] Setting ErrFile to fd 2...
I1115 09:48:53.957539  423217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:53.958040  423217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
I1115 09:48:53.959114  423217 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:53.959268  423217 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:53.962410  423217 ssh_runner.go:195] Run: systemctl --version
I1115 09:48:53.965303  423217 main.go:143] libmachine: domain functional-430000 has defined MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:53.965727  423217 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:90:ea", ip: ""} in network mk-functional-430000: {Iface:virbr1 ExpiryTime:2025-11-15 10:46:04 +0000 UTC Type:0 Mac:52:54:00:fc:90:ea Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-430000 Clientid:01:52:54:00:fc:90:ea}
I1115 09:48:53.965766  423217 main.go:143] libmachine: domain functional-430000 has defined IP address 192.168.39.154 and MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:53.965953  423217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/functional-430000/id_rsa Username:docker}
I1115 09:48:54.062174  423217 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
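
The four ImageList* runs above all exercise the same "image ls" subcommand and differ only in the --format value. A minimal sketch of reproducing the listings by hand, assuming a hypothetical profile named my-profile and a minikube binary on PATH (both placeholders, not taken from this run):

# list the cached images in each of the formats the tests cover
minikube -p my-profile image ls --format short   # bare repo:tag lines
minikube -p my-profile image ls --format table   # box-drawn table with IMAGE ID and SIZE
minikube -p my-profile image ls --format json    # one JSON array with an object per image
minikube -p my-profile image ls --format yaml    # the same data as a YAML list

With the crio runtime each invocation ends up running "sudo crictl images --output json" on the node, as the stderr traces above show.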

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-430000 ssh pgrep buildkitd: exit status 1 (178.412322ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr: (3.231375616s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 79bc7fbd77b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-430000
--> 11f19817217
Successfully tagged localhost/my-image:functional-430000
11f19817217e6bc942ba4792ccd0c1fef76c5f798e1aa6e1a9e138cc245bd318
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr:
I1115 09:48:54.399460  423258 out.go:360] Setting OutFile to fd 1 ...
I1115 09:48:54.399645  423258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:54.399673  423258 out.go:374] Setting ErrFile to fd 2...
I1115 09:48:54.399681  423258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:48:54.400023  423258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
I1115 09:48:54.400879  423258 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:54.401696  423258 config.go:182] Loaded profile config "functional-430000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:48:54.404224  423258 ssh_runner.go:195] Run: systemctl --version
I1115 09:48:54.406440  423258 main.go:143] libmachine: domain functional-430000 has defined MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:54.406892  423258 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:90:ea", ip: ""} in network mk-functional-430000: {Iface:virbr1 ExpiryTime:2025-11-15 10:46:04 +0000 UTC Type:0 Mac:52:54:00:fc:90:ea Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-430000 Clientid:01:52:54:00:fc:90:ea}
I1115 09:48:54.406921  423258 main.go:143] libmachine: domain functional-430000 has defined IP address 192.168.39.154 and MAC address 52:54:00:fc:90:ea in network mk-functional-430000
I1115 09:48:54.407070  423258 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/functional-430000/id_rsa Username:docker}
I1115 09:48:54.506476  423258 build_images.go:162] Building image from path: /tmp/build.1755293263.tar
I1115 09:48:54.506558  423258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 09:48:54.528745  423258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1755293263.tar
I1115 09:48:54.535816  423258 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1755293263.tar: stat -c "%s %y" /var/lib/minikube/build/build.1755293263.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1755293263.tar': No such file or directory
I1115 09:48:54.535850  423258 ssh_runner.go:362] scp /tmp/build.1755293263.tar --> /var/lib/minikube/build/build.1755293263.tar (3072 bytes)
I1115 09:48:54.589470  423258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1755293263
I1115 09:48:54.617803  423258 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1755293263 -xf /var/lib/minikube/build/build.1755293263.tar
I1115 09:48:54.634672  423258 crio.go:315] Building image: /var/lib/minikube/build/build.1755293263
I1115 09:48:54.634762  423258 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-430000 /var/lib/minikube/build/build.1755293263 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1115 09:48:57.531129  423258 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-430000 /var/lib/minikube/build/build.1755293263 --cgroup-manager=cgroupfs: (2.896320296s)
I1115 09:48:57.531217  423258 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1755293263
I1115 09:48:57.544751  423258 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1755293263.tar
I1115 09:48:57.557550  423258 build_images.go:218] Built localhost/my-image:functional-430000 from /tmp/build.1755293263.tar
I1115 09:48:57.557586  423258 build_images.go:134] succeeded building to: functional-430000
I1115 09:48:57.557591  423258 build_images.go:135] failed building to: 
W1115 09:48:57.562333  423258 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 157609ad-3fd5-4c41-9dd1-26519a5753e9
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
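
The ImageBuild run first confirms that buildkitd is not running on the node (the pgrep exits with status 1) and then builds the testdata context; with the crio runtime the node-side build is delegated to podman, as the stderr trace shows. A hedged sketch of an equivalent manual build, assuming a throwaway context directory and the placeholder profile my-profile (the Dockerfile steps mirror the three STEPs printed above):

# hypothetical build context with the same three steps as testdata/build
mkdir -p /tmp/build-ctx
echo hello > /tmp/build-ctx/content.txt
cat > /tmp/build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# build inside the cluster; on a crio node this runs
# "sudo podman build ... --cgroup-manager=cgroupfs" under the hood
minikube -p my-profile image build -t localhost/my-image:my-profile /tmp/build-ctx --alsologtostderr
minikube -p my-profile image ls | grep my-image   # confirm the new tag is listed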

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.534963568s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-430000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr: (5.183651517s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-430000
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image save kicbase/echo-server:functional-430000 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image rm kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-430000
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-430000 image save --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-430000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
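
Taken together, Setup through ImageSaveDaemon walk one image through a full round trip: host Docker daemon -> cluster image store -> tarball -> cluster again -> back to the host daemon. A condensed sketch of the same round trip with placeholder names (profile my-profile, tarball path /tmp/echo-server-save.tar), using the same echo-server image the tests use:

# seed the host daemon, as the Setup test does
docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:my-profile

# host daemon -> cluster, then verify the image is listed
minikube -p my-profile image load --daemon kicbase/echo-server:my-profile
minikube -p my-profile image ls

# cluster -> tarball, remove from the cluster, reload from the tarball
minikube -p my-profile image save kicbase/echo-server:my-profile /tmp/echo-server-save.tar
minikube -p my-profile image rm kicbase/echo-server:my-profile
minikube -p my-profile image load /tmp/echo-server-save.tar

# cluster -> host daemon; note the restored tag carries a localhost/ prefix
docker rmi kicbase/echo-server:my-profile
minikube -p my-profile image save --daemon kicbase/echo-server:my-profile
docker image inspect localhost/kicbase/echo-server:my-profile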

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-430000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-430000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-430000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (203.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1115 09:50:11.284647  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:50:38.993357  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m22.569179593s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.14s)
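
StartCluster brings the cluster up with the --ha flag, which provisions additional control-plane nodes: in the status output further down, ha-505457-m02 and ha-505457-m03 appear as Control Plane alongside the primary, with ha-505457-m04 joining later as a Worker. A minimal sketch of the same start with a placeholder profile name (the flags are the ones the test passes):

# hypothetical HA start mirroring the test's flags
minikube start -p my-ha --ha --memory 3072 --wait true \
  --driver=kvm2 --container-runtime=crio

# prints one status block per node (control planes and the worker), as above
minikube -p my-ha status --alsologtostderr -v 5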

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 kubectl -- rollout status deployment/busybox: (5.153278665s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-bvnwg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-cfwkt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-v2cgt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-bvnwg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-cfwkt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-v2cgt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-bvnwg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-cfwkt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-v2cgt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-bvnwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-bvnwg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-cfwkt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-cfwkt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-v2cgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 kubectl -- exec busybox-7b57f96db7-v2cgt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)
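
DeployApp and PingHostFromPods verify, from inside the busybox pods, that cluster DNS resolves both external and in-cluster names and that the host is reachable via host.minikube.internal (the test extracts the resolved address from the nslookup output and pings it; on this run that address is 192.168.39.1). A hedged sketch of the same checks, where my-ha and the pod name are placeholders:

# list the busybox pod names, as the tests do, and pick one
kubectl --context my-ha get pods -o jsonpath='{.items[*].metadata.name}'
POD=busybox-xxxxxxxxx-yyyyy   # placeholder for one of the printed names

# cluster DNS: external and in-cluster lookups from inside the pod
kubectl --context my-ha exec "$POD" -- nslookup kubernetes.io
kubectl --context my-ha exec "$POD" -- nslookup kubernetes.default.svc.cluster.local

# host reachability: resolve host.minikube.internal, then ping the gateway address
kubectl --context my-ha exec "$POD" -- sh -c \
  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context my-ha exec "$POD" -- sh -c "ping -c 1 192.168.39.1"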

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (46.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node add --alsologtostderr -v 5
E1115 09:53:11.986041  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:11.992560  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:12.004050  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:12.025523  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:12.067003  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:12.148587  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:12.310264  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:12.632063  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:13.273900  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:14.555613  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:17.118129  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 node add --alsologtostderr -v 5: (45.622776292s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-505457 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (11.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp testdata/cp-test.txt ha-505457:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3498890578/001/cp-test_ha-505457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457:/home/docker/cp-test.txt ha-505457-m02:/home/docker/cp-test_ha-505457_ha-505457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test_ha-505457_ha-505457-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457:/home/docker/cp-test.txt ha-505457-m03:/home/docker/cp-test_ha-505457_ha-505457-m03.txt
E1115 09:53:22.240216  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test_ha-505457_ha-505457-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457:/home/docker/cp-test.txt ha-505457-m04:/home/docker/cp-test_ha-505457_ha-505457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test_ha-505457_ha-505457-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp testdata/cp-test.txt ha-505457-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3498890578/001/cp-test_ha-505457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m02:/home/docker/cp-test.txt ha-505457:/home/docker/cp-test_ha-505457-m02_ha-505457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test_ha-505457-m02_ha-505457.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m02:/home/docker/cp-test.txt ha-505457-m03:/home/docker/cp-test_ha-505457-m02_ha-505457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test_ha-505457-m02_ha-505457-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m02:/home/docker/cp-test.txt ha-505457-m04:/home/docker/cp-test_ha-505457-m02_ha-505457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test_ha-505457-m02_ha-505457-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp testdata/cp-test.txt ha-505457-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3498890578/001/cp-test_ha-505457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m03:/home/docker/cp-test.txt ha-505457:/home/docker/cp-test_ha-505457-m03_ha-505457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test_ha-505457-m03_ha-505457.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m03:/home/docker/cp-test.txt ha-505457-m02:/home/docker/cp-test_ha-505457-m03_ha-505457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test_ha-505457-m03_ha-505457-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m03:/home/docker/cp-test.txt ha-505457-m04:/home/docker/cp-test_ha-505457-m03_ha-505457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test_ha-505457-m03_ha-505457-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp testdata/cp-test.txt ha-505457-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3498890578/001/cp-test_ha-505457-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m04:/home/docker/cp-test.txt ha-505457:/home/docker/cp-test_ha-505457-m04_ha-505457.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457 "sudo cat /home/docker/cp-test_ha-505457-m04_ha-505457.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m04:/home/docker/cp-test.txt ha-505457-m02:/home/docker/cp-test_ha-505457-m04_ha-505457-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m02 "sudo cat /home/docker/cp-test_ha-505457-m04_ha-505457-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 cp ha-505457-m04:/home/docker/cp-test.txt ha-505457-m03:/home/docker/cp-test_ha-505457-m04_ha-505457-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 ssh -n ha-505457-m03 "sudo cat /home/docker/cp-test_ha-505457-m04_ha-505457-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.03s)
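
CopyFile runs "minikube cp" across every source/target pair (host to node, node to host, and node to node) and checks each destination by cat-ing it over "ssh -n". A short sketch of the three cases with placeholder profile and node names (nodes follow minikube's <profile>-mNN naming, as in the matrix above):

# host -> primary node
minikube -p my-ha cp ./cp-test.txt my-ha:/home/docker/cp-test.txt
# node -> host
minikube -p my-ha cp my-ha:/home/docker/cp-test.txt ./cp-test.copy.txt
# node -> node
minikube -p my-ha cp my-ha:/home/docker/cp-test.txt my-ha-m02:/home/docker/cp-test_from_primary.txt

# verify a destination the same way the test does
minikube -p my-ha ssh -n my-ha-m02 "sudo cat /home/docker/cp-test_from_primary.txt"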

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (85.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node stop m02 --alsologtostderr -v 5
E1115 09:53:32.481631  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:53:52.963786  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:54:33.927014  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 node stop m02 --alsologtostderr -v 5: (1m25.00703191s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5: exit status 7 (505.972232ms)

                                                
                                                
-- stdout --
	ha-505457
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505457-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-505457-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-505457-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:54:56.252783  426311 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:54:56.253055  426311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:54:56.253063  426311 out.go:374] Setting ErrFile to fd 2...
	I1115 09:54:56.253067  426311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:54:56.253264  426311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 09:54:56.253423  426311 out.go:368] Setting JSON to false
	I1115 09:54:56.253456  426311 mustload.go:66] Loading cluster: ha-505457
	I1115 09:54:56.253569  426311 notify.go:221] Checking for updates...
	I1115 09:54:56.253848  426311 config.go:182] Loaded profile config "ha-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:54:56.253865  426311 status.go:174] checking status of ha-505457 ...
	I1115 09:54:56.256031  426311 status.go:371] ha-505457 host status = "Running" (err=<nil>)
	I1115 09:54:56.256050  426311 host.go:66] Checking if "ha-505457" exists ...
	I1115 09:54:56.258495  426311 main.go:143] libmachine: domain ha-505457 has defined MAC address 52:54:00:ab:da:9b in network mk-ha-505457
	I1115 09:54:56.258998  426311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:da:9b", ip: ""} in network mk-ha-505457: {Iface:virbr1 ExpiryTime:2025-11-15 10:49:16 +0000 UTC Type:0 Mac:52:54:00:ab:da:9b Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-505457 Clientid:01:52:54:00:ab:da:9b}
	I1115 09:54:56.259031  426311 main.go:143] libmachine: domain ha-505457 has defined IP address 192.168.39.188 and MAC address 52:54:00:ab:da:9b in network mk-ha-505457
	I1115 09:54:56.259178  426311 host.go:66] Checking if "ha-505457" exists ...
	I1115 09:54:56.259370  426311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:54:56.261744  426311 main.go:143] libmachine: domain ha-505457 has defined MAC address 52:54:00:ab:da:9b in network mk-ha-505457
	I1115 09:54:56.262293  426311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:da:9b", ip: ""} in network mk-ha-505457: {Iface:virbr1 ExpiryTime:2025-11-15 10:49:16 +0000 UTC Type:0 Mac:52:54:00:ab:da:9b Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-505457 Clientid:01:52:54:00:ab:da:9b}
	I1115 09:54:56.262332  426311 main.go:143] libmachine: domain ha-505457 has defined IP address 192.168.39.188 and MAC address 52:54:00:ab:da:9b in network mk-ha-505457
	I1115 09:54:56.262551  426311 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/ha-505457/id_rsa Username:docker}
	I1115 09:54:56.347781  426311 ssh_runner.go:195] Run: systemctl --version
	I1115 09:54:56.355538  426311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:54:56.374307  426311 kubeconfig.go:125] found "ha-505457" server: "https://192.168.39.254:8443"
	I1115 09:54:56.374352  426311 api_server.go:166] Checking apiserver status ...
	I1115 09:54:56.374395  426311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:54:56.396496  426311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	W1115 09:54:56.408479  426311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:54:56.408553  426311 ssh_runner.go:195] Run: ls
	I1115 09:54:56.413492  426311 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1115 09:54:56.418457  426311 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1115 09:54:56.418485  426311 status.go:463] ha-505457 apiserver status = Running (err=<nil>)
	I1115 09:54:56.418496  426311 status.go:176] ha-505457 status: &{Name:ha-505457 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:54:56.418522  426311 status.go:174] checking status of ha-505457-m02 ...
	I1115 09:54:56.420306  426311 status.go:371] ha-505457-m02 host status = "Stopped" (err=<nil>)
	I1115 09:54:56.420333  426311 status.go:384] host is not running, skipping remaining checks
	I1115 09:54:56.420341  426311 status.go:176] ha-505457-m02 status: &{Name:ha-505457-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:54:56.420360  426311 status.go:174] checking status of ha-505457-m03 ...
	I1115 09:54:56.421711  426311 status.go:371] ha-505457-m03 host status = "Running" (err=<nil>)
	I1115 09:54:56.421729  426311 host.go:66] Checking if "ha-505457-m03" exists ...
	I1115 09:54:56.424124  426311 main.go:143] libmachine: domain ha-505457-m03 has defined MAC address 52:54:00:a9:a4:7f in network mk-ha-505457
	I1115 09:54:56.424480  426311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:a4:7f", ip: ""} in network mk-ha-505457: {Iface:virbr1 ExpiryTime:2025-11-15 10:51:22 +0000 UTC Type:0 Mac:52:54:00:a9:a4:7f Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-505457-m03 Clientid:01:52:54:00:a9:a4:7f}
	I1115 09:54:56.424498  426311 main.go:143] libmachine: domain ha-505457-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:a9:a4:7f in network mk-ha-505457
	I1115 09:54:56.424677  426311 host.go:66] Checking if "ha-505457-m03" exists ...
	I1115 09:54:56.424914  426311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:54:56.427950  426311 main.go:143] libmachine: domain ha-505457-m03 has defined MAC address 52:54:00:a9:a4:7f in network mk-ha-505457
	I1115 09:54:56.428444  426311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:a4:7f", ip: ""} in network mk-ha-505457: {Iface:virbr1 ExpiryTime:2025-11-15 10:51:22 +0000 UTC Type:0 Mac:52:54:00:a9:a4:7f Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-505457-m03 Clientid:01:52:54:00:a9:a4:7f}
	I1115 09:54:56.428477  426311 main.go:143] libmachine: domain ha-505457-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:a9:a4:7f in network mk-ha-505457
	I1115 09:54:56.428616  426311 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/ha-505457-m03/id_rsa Username:docker}
	I1115 09:54:56.513244  426311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:54:56.533467  426311 kubeconfig.go:125] found "ha-505457" server: "https://192.168.39.254:8443"
	I1115 09:54:56.533504  426311 api_server.go:166] Checking apiserver status ...
	I1115 09:54:56.533543  426311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:54:56.555834  426311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1748/cgroup
	W1115 09:54:56.569133  426311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1748/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:54:56.569233  426311 ssh_runner.go:195] Run: ls
	I1115 09:54:56.575439  426311 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1115 09:54:56.580566  426311 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1115 09:54:56.580593  426311 status.go:463] ha-505457-m03 apiserver status = Running (err=<nil>)
	I1115 09:54:56.580606  426311 status.go:176] ha-505457-m03 status: &{Name:ha-505457-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:54:56.580630  426311 status.go:174] checking status of ha-505457-m04 ...
	I1115 09:54:56.582812  426311 status.go:371] ha-505457-m04 host status = "Running" (err=<nil>)
	I1115 09:54:56.582843  426311 host.go:66] Checking if "ha-505457-m04" exists ...
	I1115 09:54:56.585840  426311 main.go:143] libmachine: domain ha-505457-m04 has defined MAC address 52:54:00:c7:5e:37 in network mk-ha-505457
	I1115 09:54:56.586307  426311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c7:5e:37", ip: ""} in network mk-ha-505457: {Iface:virbr1 ExpiryTime:2025-11-15 10:52:49 +0000 UTC Type:0 Mac:52:54:00:c7:5e:37 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-505457-m04 Clientid:01:52:54:00:c7:5e:37}
	I1115 09:54:56.586338  426311 main.go:143] libmachine: domain ha-505457-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:c7:5e:37 in network mk-ha-505457
	I1115 09:54:56.586505  426311 host.go:66] Checking if "ha-505457-m04" exists ...
	I1115 09:54:56.586792  426311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:54:56.589155  426311 main.go:143] libmachine: domain ha-505457-m04 has defined MAC address 52:54:00:c7:5e:37 in network mk-ha-505457
	I1115 09:54:56.589595  426311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c7:5e:37", ip: ""} in network mk-ha-505457: {Iface:virbr1 ExpiryTime:2025-11-15 10:52:49 +0000 UTC Type:0 Mac:52:54:00:c7:5e:37 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-505457-m04 Clientid:01:52:54:00:c7:5e:37}
	I1115 09:54:56.589632  426311 main.go:143] libmachine: domain ha-505457-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:c7:5e:37 in network mk-ha-505457
	I1115 09:54:56.589836  426311 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/ha-505457-m04/id_rsa Username:docker}
	I1115 09:54:56.672468  426311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:54:56.690112  426311 status.go:176] ha-505457-m04 status: &{Name:ha-505457-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (85.51s)
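Note on the check above: with m02 stopped, `minikube status` still prints per-node state but exits non-zero (exit status 7 in this run), and the test keys off that exit code rather than the text. A minimal Go sketch of driving the same check externally; the binary path and profile name are copied from this log purely for illustration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test runs; a non-zero exit (7 in the run above)
	// means not every node in the profile is up.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-505457", "status")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all nodes healthy:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Printf("cluster degraded (exit 7):\n%s", out)
	default:
		fmt.Println("status command failed:", err)
	}
}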

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (36.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node start m02 --alsologtostderr -v 5
E1115 09:55:11.284235  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 node start m02 --alsologtostderr -v 5: (35.313038946s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 stop --alsologtostderr -v 5
E1115 09:55:55.848585  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:58:11.985903  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:58:39.693359  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 stop --alsologtostderr -v 5: (3m57.66408179s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 start --wait true --alsologtostderr -v 5
E1115 10:00:11.284787  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 start --wait true --alsologtostderr -v 5: (1m55.751383361s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.56s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.56s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node delete m03 --alsologtostderr -v 5
E1115 10:01:34.355807  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 node delete m03 --alsologtostderr -v 5: (17.91302435s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.56s)
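The go-template in the last kubectl call simply prints the status of each node's Ready condition, which is how the test confirms the remaining nodes stayed Ready after the delete. For comparison, a client-go sketch of the same readiness check (standard client-go API; none of this code is taken from the test itself):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Ready condition per node, the same information the
	// go-template above extracts.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady {
				fmt.Printf("%s Ready=%s\n", n.Name, c.Status)
			}
		}
	}
}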

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (248.41s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 stop --alsologtostderr -v 5
E1115 10:03:11.985744  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:05:11.287118  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 stop --alsologtostderr -v 5: (4m8.336397889s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5: exit status 7 (71.312427ms)

                                                
                                                
-- stdout --
	ha-505457
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-505457-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-505457-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:05:55.251857  429452 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:05:55.252148  429452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:05:55.252158  429452 out.go:374] Setting ErrFile to fd 2...
	I1115 10:05:55.252163  429452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:05:55.252367  429452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:05:55.252539  429452 out.go:368] Setting JSON to false
	I1115 10:05:55.252573  429452 mustload.go:66] Loading cluster: ha-505457
	I1115 10:05:55.252719  429452 notify.go:221] Checking for updates...
	I1115 10:05:55.252989  429452 config.go:182] Loaded profile config "ha-505457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:05:55.253010  429452 status.go:174] checking status of ha-505457 ...
	I1115 10:05:55.254933  429452 status.go:371] ha-505457 host status = "Stopped" (err=<nil>)
	I1115 10:05:55.254949  429452 status.go:384] host is not running, skipping remaining checks
	I1115 10:05:55.254956  429452 status.go:176] ha-505457 status: &{Name:ha-505457 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:05:55.254994  429452 status.go:174] checking status of ha-505457-m02 ...
	I1115 10:05:55.256129  429452 status.go:371] ha-505457-m02 host status = "Stopped" (err=<nil>)
	I1115 10:05:55.256144  429452 status.go:384] host is not running, skipping remaining checks
	I1115 10:05:55.256150  429452 status.go:176] ha-505457-m02 status: &{Name:ha-505457-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:05:55.256164  429452 status.go:174] checking status of ha-505457-m04 ...
	I1115 10:05:55.257226  429452 status.go:371] ha-505457-m04 host status = "Stopped" (err=<nil>)
	I1115 10:05:55.257251  429452 status.go:384] host is not running, skipping remaining checks
	I1115 10:05:55.257258  429452 status.go:176] ha-505457-m04 status: &{Name:ha-505457-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (248.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (113.7s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m53.05229181s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (113.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (71.94s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 node add --control-plane --alsologtostderr -v 5
E1115 10:08:11.986366  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-505457 node add --control-plane --alsologtostderr -v 5: (1m11.292279982s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-505457 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.94s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
TestJSONOutput/start/Command (54.46s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-827304 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1115 10:09:35.055882  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-827304 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (54.456699346s)
--- PASS: TestJSONOutput/start/Command (54.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
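The two parallel subtests above assert properties of the JSON emitted during start: every step event carries a currentstep value, and those values must be pairwise distinct and increasing. A small sketch of that combined property (currentstep is carried as a string in the event payload, as in the CloudEvents shown under TestErrorJSONOutput below):

package main

import (
	"fmt"
	"strconv"
)

// stepsDistinctAndIncreasing reports whether a sequence of "currentstep"
// values (kept as strings in the JSON payload) is strictly increasing,
// which also makes the values pairwise distinct.
func stepsDistinctAndIncreasing(steps []string) bool {
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil || n <= prev {
			return false
		}
		prev = n
	}
	return true
}

func main() {
	fmt.Println(stepsDistinctAndIncreasing([]string{"0", "1", "2", "19"})) // true
	fmt.Println(stepsDistinctAndIncreasing([]string{"0", "1", "1"}))       // false
}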

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-827304 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-827304 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-827304 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-827304 --output=json --user=testUser: (7.215065864s)
--- PASS: TestJSONOutput/stop/Command (7.22s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-025695 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-025695 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (85.489793ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1a670316-2bf4-42b0-a1f7-043dc1a99bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-025695] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9cbcb9e-9694-4e74-a2a9-4ff56d77b25c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"a48dbeff-e338-42bb-96cd-31fc1861d5cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2f4614d2-827d-41da-a870-24bf03dcefc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig"}}
	{"specversion":"1.0","id":"b1ba6e39-b927-4629-bd64-72e4df08fa09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube"}}
	{"specversion":"1.0","id":"80c47c89-07b5-409f-8b3f-53a96081ac44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fdb16e1a-b302-45a7-a558-91eb57d2c838","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ede70dc9-00e5-45ca-9e80-2847c32cc023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-025695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-025695
--- PASS: TestErrorJSONOutput (0.26s)
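Each stdout line above is a CloudEvents envelope, which is what --output=json emits for every step, info, and error message; the failure case ends with an io.k8s.sigs.minikube.error event carrying the exit code. A minimal decoding sketch (the struct mirrors only the fields visible above and is not minikube's own type):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the JSON above; it is not
// minikube's own type definition.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` into this program
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
}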

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (76.78s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-473488 --driver=kvm2  --container-runtime=crio
E1115 10:10:11.285292  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-473488 --driver=kvm2  --container-runtime=crio: (36.854382908s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-476565 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-476565 --driver=kvm2  --container-runtime=crio: (37.20521833s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-473488
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-476565
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-476565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-476565
helpers_test.go:175: Cleaning up "first-473488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-473488
--- PASS: TestMinikubeProfile (76.78s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-542706 --memory=3072 --mount-string /tmp/TestMountStartserial2801899804/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-542706 --memory=3072 --mount-string /tmp/TestMountStartserial2801899804/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.114300037s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.11s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-542706 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-542706 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
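The verification pair above lists the mounted directory and then asks findmnt for a JSON description of /minikube-host inside the guest. A rough sketch of consuming that JSON, assuming findmnt's usual top-level "filesystems" array with target/source/fstype/options keys (an assumption about findmnt, not something shown in this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shape assumed for `findmnt --json <target>` output; not taken from this report.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// The test runs this inside the VM via `minikube ssh`; here it runs locally
	// just to show the parsing.
	out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("findmnt failed:", err)
		return
	}
	var parsed findmntOutput
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("unexpected findmnt output:", err)
		return
	}
	for _, fs := range parsed.Filesystems {
		fmt.Printf("%s mounted from %s (%s, %s)\n", fs.Target, fs.Source, fs.Fstype, fs.Options)
	}
}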

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-562554 --memory=3072 --mount-string /tmp/TestMountStartserial2801899804/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-562554 --memory=3072 --mount-string /tmp/TestMountStartserial2801899804/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.833146571s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-562554 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-562554 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-542706 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-562554 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-562554 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-562554
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-562554: (1.260840024s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-562554
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-562554: (17.117272061s)
--- PASS: TestMountStart/serial/RestartStopped (18.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-562554 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-562554 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (100.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-998010 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1115 10:13:11.986011  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-998010 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m40.526912305s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.87s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-998010 -- rollout status deployment/busybox: (4.591119348s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-h454d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-p2ln8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-h454d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-p2ln8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-h454d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-p2ln8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.25s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-h454d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-h454d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-p2ln8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-998010 -- exec busybox-7b57f96db7-p2ln8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
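The sh pipeline above resolves host.minikube.internal inside each pod, takes line 5 of nslookup's output and its third space-separated field (the host-side address as the pod sees it), and then pings that address. A rough Go rendering of the same line-and-field extraction; the sample string is hypothetical, not captured output:

package main

import (
	"fmt"
	"strings"
)

// fifthLineThirdField mimics `awk 'NR==5' | cut -d' ' -f3`: take the fifth
// line of the input and its third field when split on single spaces.
func fifthLineThirdField(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical nslookup-style output; the real layout depends on the
	// resolver in the busybox image.
	sample := "Server: 10.96.0.10\nAddress: 10.96.0.10#53\n\nName: host.minikube.internal\nAddress: 1 192.168.39.1"
	fmt.Println(fifthLineThirdField(sample)) // 192.168.39.1
}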

                                                
                                    
TestMultiNode/serial/AddNode (41.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-998010 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-998010 -v=5 --alsologtostderr: (41.205062687s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.65s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-998010 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp testdata/cp-test.txt multinode-998010:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2705533484/001/cp-test_multinode-998010.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010:/home/docker/cp-test.txt multinode-998010-m02:/home/docker/cp-test_multinode-998010_multinode-998010-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m02 "sudo cat /home/docker/cp-test_multinode-998010_multinode-998010-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010:/home/docker/cp-test.txt multinode-998010-m03:/home/docker/cp-test_multinode-998010_multinode-998010-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m03 "sudo cat /home/docker/cp-test_multinode-998010_multinode-998010-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp testdata/cp-test.txt multinode-998010-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2705533484/001/cp-test_multinode-998010-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010-m02:/home/docker/cp-test.txt multinode-998010:/home/docker/cp-test_multinode-998010-m02_multinode-998010.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010 "sudo cat /home/docker/cp-test_multinode-998010-m02_multinode-998010.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010-m02:/home/docker/cp-test.txt multinode-998010-m03:/home/docker/cp-test_multinode-998010-m02_multinode-998010-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m03 "sudo cat /home/docker/cp-test_multinode-998010-m02_multinode-998010-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp testdata/cp-test.txt multinode-998010-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2705533484/001/cp-test_multinode-998010-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010-m03:/home/docker/cp-test.txt multinode-998010:/home/docker/cp-test_multinode-998010-m03_multinode-998010.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010 "sudo cat /home/docker/cp-test_multinode-998010-m03_multinode-998010.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 cp multinode-998010-m03:/home/docker/cp-test.txt multinode-998010-m02:/home/docker/cp-test_multinode-998010-m03_multinode-998010-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 ssh -n multinode-998010-m02 "sudo cat /home/docker/cp-test_multinode-998010-m03_multinode-998010-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.25s)
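The CopyFile block above repeats one pattern: `minikube cp` a file into a node, then `minikube ssh -n <node> "sudo cat ..."` it back to confirm the contents survived the trip. A compact sketch of that round trip (binary, profile, and node names mirror the log but are just inputs here):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	bin, profile, node := "out/minikube-linux-amd64", "multinode-998010", "multinode-998010-m02"

	// Push the local test file into the node, as the cp helper does above.
	if err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt",
		node+":/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}

	// Read it back over ssh and compare with the local copy.
	remote, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
}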

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-998010 node stop m03: (1.51088885s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-998010 status: exit status 7 (346.656739ms)

                                                
                                                
-- stdout --
	multinode-998010
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-998010-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-998010-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr: exit status 7 (345.280344ms)

                                                
                                                
-- stdout --
	multinode-998010
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-998010-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-998010-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:15:09.568045  434955 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:15:09.568306  434955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:15:09.568314  434955 out.go:374] Setting ErrFile to fd 2...
	I1115 10:15:09.568319  434955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:15:09.568521  434955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:15:09.568705  434955 out.go:368] Setting JSON to false
	I1115 10:15:09.568735  434955 mustload.go:66] Loading cluster: multinode-998010
	I1115 10:15:09.568788  434955 notify.go:221] Checking for updates...
	I1115 10:15:09.569088  434955 config.go:182] Loaded profile config "multinode-998010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:15:09.569102  434955 status.go:174] checking status of multinode-998010 ...
	I1115 10:15:09.571149  434955 status.go:371] multinode-998010 host status = "Running" (err=<nil>)
	I1115 10:15:09.571167  434955 host.go:66] Checking if "multinode-998010" exists ...
	I1115 10:15:09.573459  434955 main.go:143] libmachine: domain multinode-998010 has defined MAC address 52:54:00:00:9b:a1 in network mk-multinode-998010
	I1115 10:15:09.573972  434955 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:9b:a1", ip: ""} in network mk-multinode-998010: {Iface:virbr1 ExpiryTime:2025-11-15 11:12:46 +0000 UTC Type:0 Mac:52:54:00:00:9b:a1 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-998010 Clientid:01:52:54:00:00:9b:a1}
	I1115 10:15:09.573999  434955 main.go:143] libmachine: domain multinode-998010 has defined IP address 192.168.39.194 and MAC address 52:54:00:00:9b:a1 in network mk-multinode-998010
	I1115 10:15:09.574116  434955 host.go:66] Checking if "multinode-998010" exists ...
	I1115 10:15:09.574316  434955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:15:09.576509  434955 main.go:143] libmachine: domain multinode-998010 has defined MAC address 52:54:00:00:9b:a1 in network mk-multinode-998010
	I1115 10:15:09.576912  434955 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:9b:a1", ip: ""} in network mk-multinode-998010: {Iface:virbr1 ExpiryTime:2025-11-15 11:12:46 +0000 UTC Type:0 Mac:52:54:00:00:9b:a1 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-998010 Clientid:01:52:54:00:00:9b:a1}
	I1115 10:15:09.576950  434955 main.go:143] libmachine: domain multinode-998010 has defined IP address 192.168.39.194 and MAC address 52:54:00:00:9b:a1 in network mk-multinode-998010
	I1115 10:15:09.577132  434955 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/multinode-998010/id_rsa Username:docker}
	I1115 10:15:09.669284  434955 ssh_runner.go:195] Run: systemctl --version
	I1115 10:15:09.675761  434955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:15:09.695741  434955 kubeconfig.go:125] found "multinode-998010" server: "https://192.168.39.194:8443"
	I1115 10:15:09.695777  434955 api_server.go:166] Checking apiserver status ...
	I1115 10:15:09.695814  434955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:15:09.718720  434955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup
	W1115 10:15:09.730621  434955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:15:09.730707  434955 ssh_runner.go:195] Run: ls
	I1115 10:15:09.735805  434955 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1115 10:15:09.740473  434955 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I1115 10:15:09.740508  434955 status.go:463] multinode-998010 apiserver status = Running (err=<nil>)
	I1115 10:15:09.740523  434955 status.go:176] multinode-998010 status: &{Name:multinode-998010 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:15:09.740550  434955 status.go:174] checking status of multinode-998010-m02 ...
	I1115 10:15:09.742413  434955 status.go:371] multinode-998010-m02 host status = "Running" (err=<nil>)
	I1115 10:15:09.742435  434955 host.go:66] Checking if "multinode-998010-m02" exists ...
	I1115 10:15:09.745149  434955 main.go:143] libmachine: domain multinode-998010-m02 has defined MAC address 52:54:00:79:bc:3d in network mk-multinode-998010
	I1115 10:15:09.745560  434955 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:79:bc:3d", ip: ""} in network mk-multinode-998010: {Iface:virbr1 ExpiryTime:2025-11-15 11:13:43 +0000 UTC Type:0 Mac:52:54:00:79:bc:3d Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-998010-m02 Clientid:01:52:54:00:79:bc:3d}
	I1115 10:15:09.745593  434955 main.go:143] libmachine: domain multinode-998010-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:79:bc:3d in network mk-multinode-998010
	I1115 10:15:09.745783  434955 host.go:66] Checking if "multinode-998010-m02" exists ...
	I1115 10:15:09.746023  434955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:15:09.748053  434955 main.go:143] libmachine: domain multinode-998010-m02 has defined MAC address 52:54:00:79:bc:3d in network mk-multinode-998010
	I1115 10:15:09.748412  434955 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:79:bc:3d", ip: ""} in network mk-multinode-998010: {Iface:virbr1 ExpiryTime:2025-11-15 11:13:43 +0000 UTC Type:0 Mac:52:54:00:79:bc:3d Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-998010-m02 Clientid:01:52:54:00:79:bc:3d}
	I1115 10:15:09.748438  434955 main.go:143] libmachine: domain multinode-998010-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:79:bc:3d in network mk-multinode-998010
	I1115 10:15:09.748568  434955 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21894-412813/.minikube/machines/multinode-998010-m02/id_rsa Username:docker}
	I1115 10:15:09.832337  434955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:15:09.848518  434955 status.go:176] multinode-998010-m02 status: &{Name:multinode-998010-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:15:09.848563  434955 status.go:174] checking status of multinode-998010-m03 ...
	I1115 10:15:09.850220  434955 status.go:371] multinode-998010-m03 host status = "Stopped" (err=<nil>)
	I1115 10:15:09.850243  434955 status.go:384] host is not running, skipping remaining checks
	I1115 10:15:09.850250  434955 status.go:176] multinode-998010-m03 status: &{Name:multinode-998010-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
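The single-node stop above can be reproduced with the commands below (profile and node names from the log); note that `minikube status` exits with code 7 whenever a host in the profile is stopped, so any scripted check has to tolerate the non-zero exit:

$ out/minikube-linux-amd64 -p multinode-998010 node stop m03
# exits 7 while m03 is down; the || keeps a script going and records the code
$ out/minikube-linux-amd64 -p multinode-998010 status || echo "status exit code: $? (7 expected with a stopped node)"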

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 node start m03 -v=5 --alsologtostderr
E1115 10:15:11.284775  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-998010 node start m03 -v=5 --alsologtostderr: (40.044519491s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.56s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (293.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-998010
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-998010
E1115 10:18:11.986238  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:18:14.358975  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-998010: (2m52.933417555s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-998010 --wait=true -v=5 --alsologtostderr
E1115 10:20:11.286860  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-998010 --wait=true -v=5 --alsologtostderr: (2m0.518754539s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-998010
--- PASS: TestMultiNode/serial/RestartKeepsNodes (293.59s)
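The restart cycle above verifies that a full stop and start preserves the node list; the equivalent manual sequence, with the flags the test uses, is:

$ out/minikube-linux-amd64 node list -p multinode-998010
$ out/minikube-linux-amd64 stop -p multinode-998010
$ out/minikube-linux-amd64 start -p multinode-998010 --wait=true -v=5 --alsologtostderr
# should print the same nodes as before the stop
$ out/minikube-linux-amd64 node list -p multinode-998010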

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-998010 node delete m03: (2.208667713s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.66s)
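The readiness check after the delete uses a go-template that prints only the Ready condition of each node; the same command with simplified quoting (profile and node names from the log):

$ out/minikube-linux-amd64 -p multinode-998010 node delete m03
$ kubectl get nodes
# one line per remaining node: True if the node is Ready
$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'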

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (172.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 stop
E1115 10:23:11.986304  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-998010 stop: (2m52.46764631s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-998010 status: exit status 7 (69.637136ms)

                                                
                                                
-- stdout --
	multinode-998010
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-998010-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr: exit status 7 (67.060952ms)

                                                
                                                
-- stdout --
	multinode-998010
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-998010-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:23:39.269364  437710 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:23:39.269632  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:23:39.269640  437710 out.go:374] Setting ErrFile to fd 2...
	I1115 10:23:39.269645  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:23:39.269872  437710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:23:39.270104  437710 out.go:368] Setting JSON to false
	I1115 10:23:39.270141  437710 mustload.go:66] Loading cluster: multinode-998010
	I1115 10:23:39.270200  437710 notify.go:221] Checking for updates...
	I1115 10:23:39.270512  437710 config.go:182] Loaded profile config "multinode-998010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:23:39.270528  437710 status.go:174] checking status of multinode-998010 ...
	I1115 10:23:39.272780  437710 status.go:371] multinode-998010 host status = "Stopped" (err=<nil>)
	I1115 10:23:39.272799  437710 status.go:384] host is not running, skipping remaining checks
	I1115 10:23:39.272804  437710 status.go:176] multinode-998010 status: &{Name:multinode-998010 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:23:39.272822  437710 status.go:174] checking status of multinode-998010-m02 ...
	I1115 10:23:39.274168  437710 status.go:371] multinode-998010-m02 host status = "Stopped" (err=<nil>)
	I1115 10:23:39.274184  437710 status.go:384] host is not running, skipping remaining checks
	I1115 10:23:39.274189  437710 status.go:176] multinode-998010-m02 status: &{Name:multinode-998010-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (172.60s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (83.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-998010 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-998010 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m23.371972786s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-998010 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.85s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-998010
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-998010-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-998010-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (83.988059ms)

                                                
                                                
-- stdout --
	* [multinode-998010-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-998010-m02' is duplicated with machine name 'multinode-998010-m02' in profile 'multinode-998010'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-998010-m03 --driver=kvm2  --container-runtime=crio
E1115 10:25:11.287998  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-998010-m03 --driver=kvm2  --container-runtime=crio: (38.98774966s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-998010
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-998010: exit status 80 (210.956088ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-998010 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-998010-m03 already exists in multinode-998010-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-998010-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.21s)
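Both non-zero exits above stem from the same rule: a new profile name must not collide with a machine that already belongs to an existing profile, and `node add` refuses to create a node whose generated name is already taken. A sketch of the conflict, with names from the log:

# multinode-998010-m02 is already a machine inside the multinode-998010 profile -> exit 14 (MK_USAGE)
$ out/minikube-linux-amd64 start -p multinode-998010-m02 --driver=kvm2 --container-runtime=crio
# an unrelated standalone profile named multinode-998010-m03 is allowed...
$ out/minikube-linux-amd64 start -p multinode-998010-m03 --driver=kvm2 --container-runtime=crio
# ...but it then blocks `node add` on the original profile, which would reuse the m03 name -> exit 80 (GUEST_NODE_ADD)
$ out/minikube-linux-amd64 node add -p multinode-998010
$ out/minikube-linux-amd64 delete -p multinode-998010-m03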

                                                
                                    
x
+
TestScheduledStopUnix (108.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-420823 --memory=3072 --driver=kvm2  --container-runtime=crio
E1115 10:28:11.985866  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-420823 --memory=3072 --driver=kvm2  --container-runtime=crio: (37.155132924s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-420823 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:28:28.550475  439949 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:28:28.550828  439949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:28.550840  439949 out.go:374] Setting ErrFile to fd 2...
	I1115 10:28:28.550845  439949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:28.551073  439949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:28:28.551350  439949 out.go:368] Setting JSON to false
	I1115 10:28:28.551463  439949 mustload.go:66] Loading cluster: scheduled-stop-420823
	I1115 10:28:28.551854  439949 config.go:182] Loaded profile config "scheduled-stop-420823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:28:28.551947  439949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/config.json ...
	I1115 10:28:28.552155  439949 mustload.go:66] Loading cluster: scheduled-stop-420823
	I1115 10:28:28.552285  439949 config.go:182] Loaded profile config "scheduled-stop-420823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-420823 -n scheduled-stop-420823
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-420823 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:28:28.850316  439994 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:28:28.850591  439994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:28.850602  439994 out.go:374] Setting ErrFile to fd 2...
	I1115 10:28:28.850608  439994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:28.850853  439994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:28:28.851121  439994 out.go:368] Setting JSON to false
	I1115 10:28:28.851340  439994 daemonize_unix.go:73] killing process 439982 as it is an old scheduled stop
	I1115 10:28:28.851462  439994 mustload.go:66] Loading cluster: scheduled-stop-420823
	I1115 10:28:28.851908  439994 config.go:182] Loaded profile config "scheduled-stop-420823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:28:28.851997  439994 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/config.json ...
	I1115 10:28:28.852198  439994 mustload.go:66] Loading cluster: scheduled-stop-420823
	I1115 10:28:28.852326  439994 config.go:182] Loaded profile config "scheduled-stop-420823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1115 10:28:28.857753  416801 retry.go:31] will retry after 139.999µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.858935  416801 retry.go:31] will retry after 163.402µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.860088  416801 retry.go:31] will retry after 330.722µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.861271  416801 retry.go:31] will retry after 357.587µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.862474  416801 retry.go:31] will retry after 757.482µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.863646  416801 retry.go:31] will retry after 489.364µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.864794  416801 retry.go:31] will retry after 1.560239ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.867008  416801 retry.go:31] will retry after 985.462µs: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.868134  416801 retry.go:31] will retry after 3.811827ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.872413  416801 retry.go:31] will retry after 2.96375ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.875646  416801 retry.go:31] will retry after 5.139527ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.881913  416801 retry.go:31] will retry after 9.066231ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.891127  416801 retry.go:31] will retry after 10.495017ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.902469  416801 retry.go:31] will retry after 20.805014ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.923825  416801 retry.go:31] will retry after 31.704748ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
I1115 10:28:28.956109  416801 retry.go:31] will retry after 52.609866ms: open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-420823 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-420823 -n scheduled-stop-420823
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-420823
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-420823 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:28:54.577359  440149 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:28:54.577589  440149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:54.577599  440149 out.go:374] Setting ErrFile to fd 2...
	I1115 10:28:54.577602  440149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:54.577845  440149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:28:54.578100  440149 out.go:368] Setting JSON to false
	I1115 10:28:54.578184  440149 mustload.go:66] Loading cluster: scheduled-stop-420823
	I1115 10:28:54.578504  440149 config.go:182] Loaded profile config "scheduled-stop-420823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:28:54.578569  440149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/scheduled-stop-420823/config.json ...
	I1115 10:28:54.578773  440149 mustload.go:66] Loading cluster: scheduled-stop-420823
	I1115 10:28:54.578876  440149 config.go:182] Loaded profile config "scheduled-stop-420823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-420823
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-420823: exit status 7 (65.445193ms)

                                                
                                                
-- stdout --
	scheduled-stop-420823
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-420823 -n scheduled-stop-420823
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-420823 -n scheduled-stop-420823: exit status 7 (63.98994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-420823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-420823
--- PASS: TestScheduledStopUnix (108.88s)
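The scheduled-stop workflow exercised above, using the same commands and durations as the test:

# schedule a stop five minutes out, then inspect the pending schedule
$ out/minikube-linux-amd64 stop -p scheduled-stop-420823 --schedule 5m
$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-420823
# cancel every pending scheduled stop
$ out/minikube-linux-amd64 stop -p scheduled-stop-420823 --cancel-scheduled
# a short schedule left in place stops the host; `status` then reports Stopped and exits 7
$ out/minikube-linux-amd64 stop -p scheduled-stop-420823 --schedule 15s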

                                                
                                    
x
+
TestRunningBinaryUpgrade (150.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4271926785 start -p running-upgrade-170832 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1115 10:30:11.285079  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4271926785 start -p running-upgrade-170832 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m37.762542701s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-170832 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-170832 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.176063171s)
helpers_test.go:175: Cleaning up "running-upgrade-170832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-170832
--- PASS: TestRunningBinaryUpgrade (150.60s)
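The running-binary upgrade path starts the cluster with an older released minikube and then re-runs `start` in place with the binary under test; the old-binary path below is the temporary file used by this run:

$ /tmp/minikube-v1.32.0.4271926785 start -p running-upgrade-170832 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
# same profile, new binary: upgrades the running cluster rather than recreating it
$ out/minikube-linux-amd64 start -p running-upgrade-170832 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 delete -p running-upgrade-170832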

                                                
                                    
x
+
TestKubernetesUpgrade (140.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.747561489s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-546745
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-546745: (2.954426347s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-546745 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-546745 status --format={{.Host}}: exit status 7 (65.406633ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.879547782s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-546745 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.175564ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-546745] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-546745
	    minikube start -p kubernetes-upgrade-546745 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5467452 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-546745 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.595864763s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-546745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-546745
--- PASS: TestKubernetesUpgrade (140.24s)
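In short, the test creates a v1.28.0 cluster, stops it, restarts it at v1.34.1, and then confirms that an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED). The same sequence by hand:

$ out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 stop -p kubernetes-upgrade-546745
$ out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
# refused: an existing cluster cannot be downgraded; delete and recreate it instead
$ out/minikube-linux-amd64 start -p kubernetes-upgrade-546745 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio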

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (104.102695ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-170129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
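As the error text above spells out, --no-kubernetes and an explicit --kubernetes-version are mutually exclusive; when the version comes from global config rather than the command line, it has to be unset first:

# exits 14 (MK_USAGE): a Kubernetes version cannot be pinned while Kubernetes is disabled
$ out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
# clear any globally configured version, then start without the flag
$ out/minikube-linux-amd64 config unset kubernetes-version
$ out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --driver=kvm2 --container-runtime=crio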

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (78.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170129 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170129 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.087444061s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-170129 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (51.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.211069306s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-170129 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-170129 status -o json: exit status 2 (212.411623ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-170129","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-170129
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (51.27s)
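With Kubernetes stopped but the VM still running, `status -o json` exits 2 while still printing the JSON document, so the output has to be read regardless of the exit code. A small sketch; jq is an assumption here and not something the test uses:

# Host "Running", Kubelet/APIServer "Stopped" -> exit status 2, JSON still on stdout
$ out/minikube-linux-amd64 -p NoKubernetes-170129 status -o json | jq -r '.Host, .Kubelet, .APIServer'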

                                                
                                    
x
+
TestNoKubernetes/serial/Start (46.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170129 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.814031356s)
--- PASS: TestNoKubernetes/serial/Start (46.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-765007 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-765007 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (138.7484ms)

                                                
                                                
-- stdout --
	* [false-765007] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:32:14.480111  443298 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:32:14.480355  443298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:32:14.480366  443298 out.go:374] Setting ErrFile to fd 2...
	I1115 10:32:14.480370  443298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:32:14.480621  443298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-412813/.minikube/bin
	I1115 10:32:14.481136  443298 out.go:368] Setting JSON to false
	I1115 10:32:14.482054  443298 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8082,"bootTime":1763194653,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:32:14.482163  443298 start.go:143] virtualization: kvm guest
	I1115 10:32:14.484191  443298 out.go:179] * [false-765007] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:32:14.485470  443298 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:32:14.485496  443298 notify.go:221] Checking for updates...
	I1115 10:32:14.488106  443298 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:32:14.489521  443298 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-412813/kubeconfig
	I1115 10:32:14.491408  443298 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-412813/.minikube
	I1115 10:32:14.492870  443298 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:32:14.494365  443298 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:32:14.496373  443298 config.go:182] Loaded profile config "NoKubernetes-170129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1115 10:32:14.496528  443298 config.go:182] Loaded profile config "cert-expiration-506364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:14.496718  443298 config.go:182] Loaded profile config "cert-options-636664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:14.496915  443298 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:32:14.543088  443298 out.go:179] * Using the kvm2 driver based on user configuration
	I1115 10:32:14.544549  443298 start.go:309] selected driver: kvm2
	I1115 10:32:14.544566  443298 start.go:930] validating driver "kvm2" against <nil>
	I1115 10:32:14.544579  443298 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:32:14.546791  443298 out.go:203] 
	W1115 10:32:14.548261  443298 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1115 10:32:14.549593  443298 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-765007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-765007" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.33:8443
  name: cert-expiration-506364
contexts:
- context:
    cluster: cert-expiration-506364
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-506364
  name: cert-expiration-506364
current-context: ""
kind: Config
users:
- name: cert-expiration-506364
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/cert-expiration-506364/client.crt
    client-key: /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/cert-expiration-506364/client.key
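Note: the kubeconfig dumped above contains only the cert-expiration-506364 cluster and has current-context set to "", which is why every kubectl call against the false-765007 context in this debug dump fails with "context does not exist". A quick manual confirmation with plain kubectl would be something like the hedged sketch below:

    # hedged sketch: list the contexts visible to kubectl at the time of the dump;
    # only cert-expiration-506364 would appear, with no current context marked
    kubectl config get-contexts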

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-765007

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-765007"

                                                
                                                
----------------------- debugLogs end: false-765007 [took: 4.118958222s] --------------------------------
helpers_test.go:175: Cleaning up "false-765007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-765007
--- PASS: TestNetworkPlugins/group/false (4.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21894-412813/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-170129 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-170129 "sudo systemctl is-active --quiet service kubelet": exit status 1 (161.506222ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
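The non-zero exit above is the expected (passing) outcome: on a --no-kubernetes profile the kubelet unit should not be active, so the test treats a failing is-active check as success. A minimal manual re-check along the same lines, using the profile name and binary path from the log:

    # hedged sketch: a non-zero exit (unit inactive or not installed) is the desired result here
    out/minikube-linux-amd64 ssh -p NoKubernetes-170129 "sudo systemctl is-active kubelet"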

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (129.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1026069713 start -p stopped-upgrade-814289 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1026069713 start -p stopped-upgrade-814289 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (57.128821404s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1026069713 -p stopped-upgrade-814289 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1026069713 -p stopped-upgrade-814289 stop: (1.721565859s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-814289 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-814289 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.809779488s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-170129
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-170129: (1.291966859s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (54.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170129 --driver=kvm2  --container-runtime=crio
E1115 10:33:11.985711  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170129 --driver=kvm2  --container-runtime=crio: (54.201980666s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (54.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-170129 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-170129 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.348783ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestPause/serial/Start (64.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-485426 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-485426 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m4.140582329s)
--- PASS: TestPause/serial/Start (64.14s)

                                                
                                    
x
+
TestISOImage/Setup (20.02s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-763099 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-763099 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.019469656s)
--- PASS: TestISOImage/Setup (20.02s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-814289
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-814289: (1.142315223s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (72.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1115 10:34:54.362231  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m12.965846127s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.97s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which wget"
E1115 10:42:47.295568  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/wget (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (86.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1115 10:35:11.285095  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m26.270659262s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (100.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.866670126s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-765007 "pgrep -a kubelet"
I1115 10:36:02.905305  416801 config.go:182] Loaded profile config "auto-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b82pl" [aa4b7c2a-3dda-4854-969c-20cbe87f8040] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b82pl" [aa4b7c2a-3dda-4854-969c-20cbe87f8040] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005263645s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.27s)
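The readiness wait above is driven by the test helpers; a roughly equivalent manual check, assuming the default namespace and the app=netcat label shown in the log, would be:

    # hedged sketch: block until the netcat pod from testdata/netcat-deployment.yaml reports Ready
    kubectl --context auto-765007 -n default wait --for=condition=Ready pod -l app=netcat --timeout=15m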

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
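For reference on the two probes above: -z puts nc in connect-only (zero-I/O) mode, -w 5 sets a 5-second timeout, and -i 5 spaces out connection attempts. Localhost dials 127.0.0.1 inside the pod, while HairPin dials the pod's own Service name, exercising hairpin traffic back to the originating pod. A small sketch to inspect that service (the name "netcat" is assumed from the command above):

    # hedged sketch: confirm the netcat Service exists and has endpoints behind it
    kubectl --context auto-765007 get svc netcat
    kubectl --context auto-765007 get endpoints netcat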

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (80.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m20.733914098s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (80.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6sq2m" [e61fceef-48d7-4a02-80f9-c8d41169b9af] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003809529s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (72.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m12.406816688s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-765007 "pgrep -a kubelet"
I1115 10:36:37.269710  416801 config.go:182] Loaded profile config "kindnet-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cwt2f" [a1215a57-be27-41f3-8861-4cac96a15ffb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cwt2f" [a1215a57-be27-41f3-8861-4cac96a15ffb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007481726s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (84.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.726700454s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xtkvz" [18095047-28cd-48ba-ac8c-f746df6d109a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-xtkvz" [18095047-28cd-48ba-ac8c-f746df6d109a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006972229s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-765007 "pgrep -a kubelet"
I1115 10:37:19.897023  416801 config.go:182] Loaded profile config "calico-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xmcrp" [811f3374-5110-4c51-904e-dddb630c3869] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xmcrp" [811f3374-5110-4c51-904e-dddb630c3869] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005876309s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-765007 "pgrep -a kubelet"
I1115 10:37:38.972361  416801 config.go:182] Loaded profile config "custom-flannel-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ds9xq" [90441e6c-04a3-467f-a81f-ec8e4cb621d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ds9xq" [90441e6c-04a3-467f-a81f-ec8e4cb621d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004213123s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-765007 "pgrep -a kubelet"
I1115 10:37:44.423874  416801 config.go:182] Loaded profile config "enable-default-cni-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m8tt2" [21d8e2b5-958c-4ae4-b3f1-dfc572381bd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m8tt2" [21d8e2b5-958c-4ae4-b3f1-dfc572381bd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005801889s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (57.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-765007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (57.720041759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (61.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-470091 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-470091 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.993878921s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (96.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-693954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:38:11.985949  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-693954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m36.519683158s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vdzbq" [abda6125-4751-446f-aa94-0d6756bbcd8f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007231188s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-765007 "pgrep -a kubelet"
I1115 10:38:37.079153  416801 config.go:182] Loaded profile config "flannel-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-khq9z" [aae4b6d4-dc83-4766-8cee-8c8ee3335e63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-khq9z" [aae4b6d4-dc83-4766-8cee-8c8ee3335e63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.00544757s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-765007 "pgrep -a kubelet"
I1115 10:38:48.690468  416801 config.go:182] Loaded profile config "bridge-765007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-765007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zz66p" [2320aaae-5b0f-452c-84f1-c90552b4e96c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zz66p" [2320aaae-5b0f-452c-84f1-c90552b4e96c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005469849s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-765007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-765007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
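
The flannel and bridge DNS, Localhost, and HairPin subtests above each exec a single probe inside the netcat deployment. A rough Go equivalent of those three probes, assuming the bridge-765007 context still exists and kubectl is on PATH; this is a sketch, not the test suite's own helper code:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one in-pod check via kubectl exec, mirroring the commands logged by net_test.go.
func probe(context, shellCmd string) error {
	out, err := exec.Command("kubectl", "--context", context,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", shellCmd, out)
	return err
}

func main() {
	ctx := "bridge-765007"
	checks := []string{
		"nslookup kubernetes.default",    // DNS: cluster DNS resolves the API service name
		"nc -w 5 -i 5 -z localhost 8080", // Localhost: the pod can reach its own port
		"nc -w 5 -i 5 -z netcat 8080",    // HairPin: the pod can reach itself through its service
	}
	for _, c := range checks {
		if err := probe(ctx, c); err != nil {
			fmt.Println("check failed:", err)
		}
	}
}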

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-647367 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-647367 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (54.827723843s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.83s)
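
FirstStart is a plain `minikube start` with the flags recorded above. A hedged sketch of driving the same start programmatically; the profile name and flags are copied from the log, and a `minikube` binary on PATH stands in for out/minikube-linux-amd64:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Flags taken from the FirstStart invocation in this report (--alsologtostderr omitted).
	cmd := exec.Command("minikube", "start",
		"-p", "embed-certs-647367",
		"--memory=3072", "--wait=true", "--embed-certs",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.34.1")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}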

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-470091 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6f493b23-1a5c-43cc-acea-f9a902c8463c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6f493b23-1a5c-43cc-acea-f9a902c8463c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004029324s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-470091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)
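
DeployApp applies the busybox manifest, waits for the pod to reach Running, then reads the open-file limit inside it. A rough shell-out sketch of the same flow; `kubectl wait` here is a stand-in for the test's own pod-polling helper, and the manifest path is the one logged above:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one command and aborts on failure, streaming its output.
func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v", args, err)
	}
}

func main() {
	ctx := "old-k8s-version-470091"
	// Create the busybox test pod from the repo's testdata manifest.
	run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")
	// Wait for the labelled pod to become Ready (the test polls pod phases instead).
	run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")
	// Same post-deploy check as above: the file-descriptor limit inside the container.
	run("kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}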

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-421138 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-421138 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.678173206s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-470091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-470091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)
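
EnableAddonWhileActive enables metrics-server on the running profile with an image and registry override (fake.domain), then describes the deployment so the overridden reference can be inspected. A minimal sketch of the same two steps, assuming `minikube` and `kubectl` on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-470091"
	// Enable the addon with the image and registry overrides used above.
	if out, err := exec.Command("minikube", "addons", "enable", "metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain").CombinedOutput(); err != nil {
		log.Fatalf("enable failed: %v\n%s", err, out)
	}
	// Describe the deployment; the fake.domain registry should show up in the image reference.
	out, err := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		log.Fatalf("describe failed: %v\n%s", err, out)
	}
	fmt.Println(string(out))
}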

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (78.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-470091 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-470091 --alsologtostderr -v=3: (1m18.940337566s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (78.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-693954 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [55e2b93a-5696-4ba5-917a-2c99872b20dc] Pending
helpers_test.go:352: "busybox" [55e2b93a-5696-4ba5-917a-2c99872b20dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [55e2b93a-5696-4ba5-917a-2c99872b20dc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00551485s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-693954 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-693954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-693954 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (89.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-693954 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-693954 --alsologtostderr -v=3: (1m29.546045176s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-647367 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [42fa516d-4bf3-481f-99e0-a3cb2e86f155] Pending
helpers_test.go:352: "busybox" [42fa516d-4bf3-481f-99e0-a3cb2e86f155] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [42fa516d-4bf3-481f-99e0-a3cb2e86f155] Running
E1115 10:40:11.284656  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/addons-965866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004308257s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-647367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-647367 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-647367 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (78.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-647367 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-647367 --alsologtostderr -v=3: (1m18.433746517s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (78.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-421138 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9fc03cb4-8ce2-41bd-ae7f-c8adbb0ee2f9] Pending
helpers_test.go:352: "busybox" [9fc03cb4-8ce2-41bd-ae7f-c8adbb0ee2f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9fc03cb4-8ce2-41bd-ae7f-c8adbb0ee2f9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005107453s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-421138 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-421138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-421138 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (78.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-421138 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-421138 --alsologtostderr -v=3: (1m18.588432571s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (78.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-470091 -n old-k8s-version-470091
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-470091 -n old-k8s-version-470091: exit status 7 (62.740902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-470091 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
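
EnableAddonAfterStop leans on `minikube status` exit codes: the stopped profile reports Host "Stopped" with exit status 7, which the test tolerates, and the dashboard addon is then enabled while the profile is still down. A hedged sketch of reading that exit code and toggling the addon:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-470091"
	// Ask only for the Host field, as the test does; a stopped profile exits non-zero.
	out, err := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("host=%s exit=%d (non-zero may just mean the profile is stopped)\n",
			strings.TrimSpace(string(out)), exitErr.ExitCode())
	} else if err != nil {
		log.Fatal(err)
	}
	// Addons can still be toggled while the profile is stopped, as the passing test shows.
	if out, err := exec.Command("minikube", "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").CombinedOutput(); err != nil {
		log.Fatalf("enable dashboard failed: %v\n%s", err, out)
	}
}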

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (40.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-470091 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1115 10:41:03.155622  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.162191  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.174368  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.196576  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.238229  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.320230  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.481972  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:03.803537  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:04.444863  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:05.726948  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:08.288897  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:13.410327  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-470091 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (40.196084132s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-470091 -n old-k8s-version-470091
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (40.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1115 10:41:23.652593  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xzzc2" [d5316315-1ff6-4897-ac0d-50b141b39118] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xzzc2" [d5316315-1ff6-4897-ac0d-50b141b39118] Running
E1115 10:41:31.719085  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:32.360961  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005317746s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)
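
UserAppExistsAfterStop only verifies that the dashboard pods created before the stop come back to Running after the restart. The same check can be approximated with a label-based wait; this sketch substitutes `kubectl wait` for the test's pod-watching helper:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Wait for the dashboard pods (deployed before the stop) to become Ready again.
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-470091",
		"wait", "--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=9m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}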

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-693954 -n no-preload-693954
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-693954 -n no-preload-693954: exit status 7 (74.778161ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-693954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-693954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:41:31.072831  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:31.079267  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:31.090763  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:31.112292  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:31.153748  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:31.235295  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:31.397188  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-693954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (58.360485879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-693954 -n no-preload-693954
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-647367 -n embed-certs-647367
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-647367 -n embed-certs-647367: exit status 7 (89.932006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-647367 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-647367 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:41:33.642893  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:41:36.204743  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-647367 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (57.960652545s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-647367 -n embed-certs-647367
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xzzc2" [d5316315-1ff6-4897-ac0d-50b141b39118] Running
E1115 10:41:41.327045  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004496362s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-470091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-470091 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
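
VerifyKubernetesImages lists the images cached in the profile and reports anything outside minikube's own set (busybox and kindnetd above). A rough sketch of a similar scan over the plain `image list` output; the prefix list used to classify images is an assumption for illustration, not the test's rule:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-470091", "image", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Anything not under these registries is treated as a "non-minikube" image here.
	known := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range strings.Fields(string(out)) {
		owned := false
		for _, p := range known {
			if strings.HasPrefix(img, p) {
				owned = true
				break
			}
		}
		if !owned {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}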

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-470091 --alsologtostderr -v=1
E1115 10:41:44.134081  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-470091 -n old-k8s-version-470091
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-470091 -n old-k8s-version-470091: exit status 2 (221.046854ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-470091 -n old-k8s-version-470091
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-470091 -n old-k8s-version-470091: exit status 2 (235.967979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-470091 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-470091 -n old-k8s-version-470091
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-470091 -n old-k8s-version-470091
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.72s)
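
The Pause subtest reads status through single-field Go templates: while paused the apiserver reports Paused and the kubelet reports Stopped, each with exit status 2, and unpause brings them back. A compact sketch of that pause, check, unpause cycle with the exit-code handling simplified:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// status runs `minikube status` with a one-field Go template and returns the raw value;
// non-zero exits are expected while components are paused or stopped, so they are ignored here.
func status(profile, field string) string {
	out, _ := exec.Command("minikube", "status", "--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "old-k8s-version-470091"
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver:", status(profile, "APIServer")) // expected: Paused
	fmt.Println("kubelet:  ", status(profile, "Kubelet"))   // expected: Stopped
	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver:", status(profile, "APIServer")) // should recover after unpause
}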

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (66.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-964307 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:41:51.569375  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-964307 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m6.134402473s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138: exit status 7 (71.675088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-421138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (75.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-421138 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:42:12.051431  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:13.681816  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:13.688343  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:13.699885  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:13.721421  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:13.762988  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:13.844947  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:14.006824  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:14.328756  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:14.970453  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:16.251895  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:18.814211  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:23.936549  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:25.095530  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/auto-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-421138 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.37107955s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (75.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7h2n6" [6f84cc9e-4264-4164-966e-f6fa7844a81a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008156345s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dmm5c" [550b21da-236b-4eb3-bac1-c691f1bbe0a3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dmm5c" [550b21da-236b-4eb3-bac1-c691f1bbe0a3] Running
E1115 10:42:34.178891  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004857702s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7h2n6" [6f84cc9e-4264-4164-966e-f6fa7844a81a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004812175s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-693954 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dmm5c" [550b21da-236b-4eb3-bac1-c691f1bbe0a3] Running
E1115 10:42:39.264528  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.270991  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.282531  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.304032  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.345456  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.427093  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.588653  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:39.910239  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:40.551816  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004294375s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-647367 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-693954 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-693954 --alsologtostderr -v=1
E1115 10:42:41.833866  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-693954 -n no-preload-693954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-693954 -n no-preload-693954: exit status 2 (269.800231ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-693954 -n no-preload-693954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-693954 -n no-preload-693954: exit status 2 (273.268925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-693954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-693954 -n no-preload-693954
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-693954 -n no-preload-693954
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-647367 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-647367 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-647367 --alsologtostderr -v=1: (1.000470463s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-647367 -n embed-certs-647367
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-647367 -n embed-certs-647367: exit status 2 (298.802849ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-647367 -n embed-certs-647367
E1115 10:42:44.395812  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-647367 -n embed-certs-647367: exit status 2 (310.424015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-647367 --alsologtostderr -v=1
E1115 10:42:44.723992  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:44.730527  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:44.742094  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-647367 -n embed-certs-647367
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-647367 -n embed-certs-647367
E1115 10:42:46.013492  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)
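
The seven PersistentMounts subtests above all run the same probe from iso_test.go:97, "df -t ext4 <path> | grep <path>", which succeeds only when the path is mounted on an ext4 filesystem. A hedged sketch that replays the same checks by hand (paths and profile name are taken from the log above; the loop itself is not part of the test suite):

	for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet \
	         /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
	  # Prints the matching df line when the path sits on an ext4 mount; notes a miss otherwise.
	  out/minikube-linux-amd64 -p guest-763099 ssh "df -t ext4 $d | grep $d" \
	    || echo "$d is not an ext4-backed persistent mount"
	done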

                                                
                                    
TestISOImage/VersionJSON (0.17s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1762018871-21834
iso_test.go:118:   kicbase_version: v0.0.48-1760939008-21773
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 820bf516181cabed83ba2b27d39e21b2adf01240
--- PASS: TestISOImage/VersionJSON (0.17s)
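
For reference, the version fields parsed above come straight from /version.json inside the ISO; based on those fields, the raw file likely has roughly the following shape (key order and formatting are an assumption, the values are the ones logged):

	out/minikube-linux-amd64 -p guest-763099 ssh "cat /version.json"
	# Expected shape (reconstructed from the parsed fields, not captured verbatim from this run):
	# {
	#   "iso_version": "v1.37.0-1762018871-21834",
	#   "kicbase_version": "v0.0.48-1760939008-21773",
	#   "minikube_version": "v1.37.0",
	#   "commit": "820bf516181cabed83ba2b27d39e21b2adf01240"
	# }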

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-763099 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
E1115 10:42:53.013823  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/kindnet-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-964307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1115 10:42:54.660691  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/calico-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:54.979259  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:55.060463  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-964307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.061857877s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-964307 --alsologtostderr -v=3
E1115 10:42:59.759506  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:43:05.221223  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-964307 --alsologtostderr -v=3: (10.757986677s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964307 -n newest-cni-964307
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964307 -n newest-cni-964307: exit status 7 (74.76185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-964307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-964307 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-964307 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (31.675219818s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964307 -n newest-cni-964307
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8cd4w" [96b1eff7-5cea-47c7-a94d-a1ad44511064] Running
E1115 10:43:11.986293  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/functional-430000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004204793s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8cd4w" [96b1eff7-5cea-47c7-a94d-a1ad44511064] Running
E1115 10:43:20.240987  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/custom-flannel-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005760492s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-421138 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-421138 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-421138 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138: exit status 2 (260.39953ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138: exit status 2 (227.898312ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-421138 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138
E1115 10:43:25.702719  416801 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/enable-default-cni-765007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-421138 -n default-k8s-diff-port-421138
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-964307 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-964307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964307 -n newest-cni-964307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964307 -n newest-cni-964307: exit status 2 (215.509528ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964307 -n newest-cni-964307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964307 -n newest-cni-964307: exit status 2 (215.816373ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-964307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964307 -n newest-cni-964307
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964307 -n newest-cni-964307
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

                                                
                                    

Test skip (35/346)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965866 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-765007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-765007" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.33:8443
  name: cert-expiration-506364
contexts:
- context:
    cluster: cert-expiration-506364
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-506364
  name: cert-expiration-506364
current-context: ""
kind: Config
users:
- name: cert-expiration-506364
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/cert-expiration-506364/client.crt
    client-key: /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/cert-expiration-506364/client.key
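
The repeated "context was not found" and "does not exist" errors in this debug dump follow directly from the kubeconfig above: current-context is empty and the only entry is cert-expiration-506364, so no kubenet-765007 context exists (the profile was never started because the test is skipped under crio). A minimal sketch of how the available contexts would be listed and one selected, using the context name from the config above:

	kubectl config get-contexts
	kubectl config use-context cert-expiration-506364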

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-765007

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-765007"

                                                
                                                
----------------------- debugLogs end: kubenet-765007 [took: 3.418034185s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-765007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-765007
--- SKIP: TestNetworkPlugins/group/kubenet (3.61s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-765007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-765007

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-765007

>>> host: /etc/nsswitch.conf:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/hosts:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/resolv.conf:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-765007

>>> host: crictl pods:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: crictl containers:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> k8s: describe netcat deployment:
error: context "cilium-765007" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-765007" does not exist

>>> k8s: netcat logs:
error: context "cilium-765007" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-765007" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-765007" does not exist

>>> k8s: coredns logs:
error: context "cilium-765007" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-765007" does not exist

>>> k8s: api server logs:
error: context "cilium-765007" does not exist

>>> host: /etc/cni:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: ip a s:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: ip r s:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: iptables-save:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: iptables table nat:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-765007

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-765007

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-765007" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-765007" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-765007

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-765007

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-765007" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-765007" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-765007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-765007" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-765007" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: kubelet daemon config:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> k8s: kubelet logs:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-412813/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.33:8443
  name: cert-expiration-506364
contexts:
- context:
    cluster: cert-expiration-506364
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-506364
  name: cert-expiration-506364
current-context: ""
kind: Config
users:
- name: cert-expiration-506364
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/cert-expiration-506364/client.crt
    client-key: /home/jenkins/minikube-integration/21894-412813/.minikube/profiles/cert-expiration-506364/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-765007

>>> host: docker daemon status:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: docker daemon config:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: docker system info:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: cri-docker daemon status:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: cri-docker daemon config:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: cri-dockerd version:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: containerd daemon status:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: containerd daemon config:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: containerd config dump:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: crio daemon status:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: crio daemon config:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: /etc/crio:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

>>> host: crio config:
* Profile "cilium-765007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-765007"

----------------------- debugLogs end: cilium-765007 [took: 5.467636973s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-765007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-765007
--- SKIP: TestNetworkPlugins/group/cilium (5.65s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-726722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-726722
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    