Test Report: KVM_Linux_crio 21866

77bc04e31513dc44a023e1d185fd1b44f1864364:2025-11-08:42249

Failed tests (3/344)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    157.63
244    TestPreload                                    165.54
303    TestPause/serial/SecondStartNoReconfiguration  59.98
TestAddons/parallel/Ingress (157.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-982714 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-982714 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-982714 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0f5c0b83-bcbf-47ea-aed4-d32a51f5b988] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0f5c0b83-bcbf-47ea-aed4-d32a51f5b988] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005268356s
I1108 08:32:49.696426    9745 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-982714 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.567353381s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-982714 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.224
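[Note] The root failure is the curl probe above: "ssh: Process exited with status 28" is curl's own exit code 28 (operation timed out) propagated through minikube ssh, i.e. nothing answered for the nginx.example.com virtual host on 127.0.0.1:80 inside the VM within the 2m14s window. Below is the same request expressed in Go as a minimal sketch; it would need to run where 127.0.0.1 reaches the ingress controller (inside the VM), the URL and Host header come from the log, and the 30s timeout is an illustrative assumption, not the test's actual budget.

// ingress_probe_sketch.go - the request the failing step issues, expressed in Go.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second} // illustrative timeout

	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Routing is host-based: the Ingress rule only matches requests carrying
	// this virtual host name, which is why curl passed -H 'Host: ...'.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed (timeout or connect error):", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}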
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-982714 -n addons-982714
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 logs -n 25: (1.329464715s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-567976                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-567976 │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │ 08 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-801039 --alsologtostderr --binary-mirror http://127.0.0.1:39003 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-801039 │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-801039                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-801039 │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │ 08 Nov 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-982714                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-982714                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-982714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-982714 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-982714 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ enable headlamp -p addons-982714 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ ip      │ addons-982714 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ ssh     │ addons-982714 ssh cat /opt/local-path-provisioner/pvc-6b247cde-862c-46c9-bdd2-91fe4ed25c39_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:33 UTC │
	│ addons  │ addons-982714 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ ssh     │ addons-982714 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-982714                                                                                                                                                                                                                                                                                                                                                                                         │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:32 UTC │
	│ addons  │ addons-982714 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │ 08 Nov 25 08:33 UTC │
	│ addons  │ addons-982714 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:33 UTC │ 08 Nov 25 08:33 UTC │
	│ addons  │ addons-982714 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:33 UTC │ 08 Nov 25 08:33 UTC │
	│ ip      │ addons-982714 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-982714        │ jenkins │ v1.37.0 │ 08 Nov 25 08:35 UTC │ 08 Nov 25 08:35 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:29:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:29:39.443408   10436 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:29:39.443639   10436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:29:39.443648   10436 out.go:374] Setting ErrFile to fd 2...
	I1108 08:29:39.443653   10436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:29:39.443812   10436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 08:29:39.444294   10436 out.go:368] Setting JSON to false
	I1108 08:29:39.445080   10436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":720,"bootTime":1762589859,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:29:39.445195   10436 start.go:143] virtualization: kvm guest
	I1108 08:29:39.446755   10436 out.go:179] * [addons-982714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:29:39.447866   10436 notify.go:221] Checking for updates...
	I1108 08:29:39.447902   10436 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:29:39.449124   10436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:29:39.450352   10436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 08:29:39.451508   10436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:29:39.452514   10436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:29:39.453591   10436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:29:39.455280   10436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:29:39.484781   10436 out.go:179] * Using the kvm2 driver based on user configuration
	I1108 08:29:39.485729   10436 start.go:309] selected driver: kvm2
	I1108 08:29:39.485740   10436 start.go:930] validating driver "kvm2" against <nil>
	I1108 08:29:39.485750   10436 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:29:39.486370   10436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:29:39.486608   10436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 08:29:39.486634   10436 cni.go:84] Creating CNI manager for ""
	I1108 08:29:39.486674   10436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 08:29:39.486682   10436 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1108 08:29:39.486728   10436 start.go:353] cluster config:
	{Name:addons-982714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-982714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:29:39.486814   10436 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 08:29:39.488030   10436 out.go:179] * Starting "addons-982714" primary control-plane node in "addons-982714" cluster
	I1108 08:29:39.489005   10436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:39.489031   10436 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 08:29:39.489041   10436 cache.go:59] Caching tarball of preloaded images
	I1108 08:29:39.489105   10436 preload.go:233] Found /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 08:29:39.489115   10436 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 08:29:39.489378   10436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/config.json ...
	I1108 08:29:39.489395   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/config.json: {Name:mk59b012fe76d542aeb6e8a46cc1df773b217b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:39.489528   10436 start.go:360] acquireMachinesLock for addons-982714: {Name:mk17d57b1ca3eb78588f74785db7bcd997a10966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 08:29:39.489571   10436 start.go:364] duration metric: took 30.466µs to acquireMachinesLock for "addons-982714"
	I1108 08:29:39.489588   10436 start.go:93] Provisioning new machine with config: &{Name:addons-982714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-982714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 08:29:39.489639   10436 start.go:125] createHost starting for "" (driver="kvm2")
	I1108 08:29:39.491608   10436 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1108 08:29:39.491739   10436 start.go:159] libmachine.API.Create for "addons-982714" (driver="kvm2")
	I1108 08:29:39.491764   10436 client.go:173] LocalClient.Create starting
	I1108 08:29:39.491840   10436 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem
	I1108 08:29:39.689400   10436 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem
	I1108 08:29:39.763923   10436 main.go:143] libmachine: creating domain...
	I1108 08:29:39.763942   10436 main.go:143] libmachine: creating network...
	I1108 08:29:39.765146   10436 main.go:143] libmachine: found existing default network
	I1108 08:29:39.765356   10436 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1108 08:29:39.765832   10436 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00197df10}
	I1108 08:29:39.765905   10436 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-982714</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1108 08:29:39.771443   10436 main.go:143] libmachine: creating private network mk-addons-982714 192.168.39.0/24...
	I1108 08:29:39.832813   10436 main.go:143] libmachine: private network mk-addons-982714 192.168.39.0/24 created
	I1108 08:29:39.833098   10436 main.go:143] libmachine: <network>
	  <name>mk-addons-982714</name>
	  <uuid>c5b0fe19-2ab2-4a20-b933-63a510f64fa3</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:a2:80:e1'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
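[Note] On the define/create pair above: libvirt networks are first defined (persisted) and then created (started), after which the daemon echoes back the stored XML with the fields it generated itself (uuid, bridge virbr1, MAC). A sketch of that sequence, assuming the github.com/libvirt/libvirt-go bindings; minikube's kvm2 driver wraps this in its own helpers, so take the shape, not the names:

// network_define_sketch.go - define-then-start for the private network.
package main

import (
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

const networkXML = `<network>
  <name>mk-addons-982714</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define persists the network config; it does not start anything yet.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()

	// Create brings the bridge up and starts DHCP for the declared range.
	// Libvirt fills in uuid, bridge name and MAC itself, which is why the
	// echoed XML above has more fields than the XML that was defined.
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
}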
	
	I1108 08:29:39.833126   10436 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714 ...
	I1108 08:29:39.833147   10436 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1108 08:29:39.833155   10436 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:29:39.833223   10436 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21866-5845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1108 08:29:40.120362   10436 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa...
	I1108 08:29:40.204585   10436 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/addons-982714.rawdisk...
	I1108 08:29:40.204621   10436 main.go:143] libmachine: Writing magic tar header
	I1108 08:29:40.204641   10436 main.go:143] libmachine: Writing SSH key tar header
	I1108 08:29:40.204710   10436 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714 ...
	I1108 08:29:40.204767   10436 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714
	I1108 08:29:40.204799   10436 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714 (perms=drwx------)
	I1108 08:29:40.204817   10436 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube/machines
	I1108 08:29:40.204829   10436 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube/machines (perms=drwxr-xr-x)
	I1108 08:29:40.204842   10436 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:29:40.204853   10436 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube (perms=drwxr-xr-x)
	I1108 08:29:40.204862   10436 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845
	I1108 08:29:40.204871   10436 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845 (perms=drwxrwxr-x)
	I1108 08:29:40.204881   10436 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1108 08:29:40.204892   10436 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1108 08:29:40.204899   10436 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1108 08:29:40.204908   10436 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1108 08:29:40.204917   10436 main.go:143] libmachine: checking permissions on dir: /home
	I1108 08:29:40.204926   10436 main.go:143] libmachine: skipping /home - not owner
	I1108 08:29:40.204930   10436 main.go:143] libmachine: defining domain...
	I1108 08:29:40.206214   10436 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-982714</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/addons-982714.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-982714'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
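[Note] The domain XML above encodes the machine config from the log: 4096 MiB and 2 vCPUs, boot order cdrom (the boot2docker ISO) before hd, the raw disk and both NICs on virtio. The driver renders this XML from a Go text/template; the sketch below is a toy version of that mechanism, with the template cut down to a few of the visible fields rather than the driver's actual template:

// domain_template_sketch.go - rendering domain XML from machine parameters.
package main

import (
	"log"
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
</domain>
`

type machineConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	// Values from the log: addons-982714, 4096 MiB, 2 vCPUs.
	cfg := machineConfig{Name: "addons-982714", MemoryMiB: 4096, CPUs: 2}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}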
	
	I1108 08:29:40.213643   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:1f:c9:4a in network default
	I1108 08:29:40.214330   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:40.214347   10436 main.go:143] libmachine: starting domain...
	I1108 08:29:40.214352   10436 main.go:143] libmachine: ensuring networks are active...
	I1108 08:29:40.215282   10436 main.go:143] libmachine: Ensuring network default is active
	I1108 08:29:40.215759   10436 main.go:143] libmachine: Ensuring network mk-addons-982714 is active
	I1108 08:29:40.216532   10436 main.go:143] libmachine: getting domain XML...
	I1108 08:29:40.217596   10436 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-982714</name>
	  <uuid>fa06b131-7f1e-49db-98b5-24ce9a4473c2</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/addons-982714.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:84:e4:dc'/>
	      <source network='mk-addons-982714'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:1f:c9:4a'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
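[Note] Note the difference from the XML that was defined: at start time libvirt has filled in defaults (uuid, emulator path, PCI addresses, USB/SCSI controllers, memballoon), which is what the "getting domain XML" step reads back before "starting domain". The read-and-start sequence, sketched with the same github.com/libvirt/libvirt-go bindings assumed above:

// domain_start_sketch.go - look up, re-read and boot an already defined domain.
package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("addons-982714")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// GetXMLDesc returns the expanded definition, with the defaults libvirt
	// added; this is why the "starting domain XML" dump above is larger than
	// the XML that was defined.
	xml, err := dom.GetXMLDesc(0)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(xml)

	// Create boots the (already defined) domain.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}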
	
	I1108 08:29:41.455100   10436 main.go:143] libmachine: waiting for domain to start...
	I1108 08:29:41.456568   10436 main.go:143] libmachine: domain is now running
	I1108 08:29:41.456587   10436 main.go:143] libmachine: waiting for IP...
	I1108 08:29:41.457298   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:41.457814   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:41.457827   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:41.458106   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:41.458158   10436 retry.go:31] will retry after 227.95219ms: waiting for domain to come up
	I1108 08:29:41.687612   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:41.688172   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:41.688192   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:41.688508   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:41.688545   10436 retry.go:31] will retry after 253.72962ms: waiting for domain to come up
	I1108 08:29:41.943985   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:41.944440   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:41.944458   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:41.944805   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:41.944840   10436 retry.go:31] will retry after 342.0306ms: waiting for domain to come up
	I1108 08:29:42.288228   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:42.288732   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:42.288748   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:42.289072   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:42.289104   10436 retry.go:31] will retry after 541.566728ms: waiting for domain to come up
	I1108 08:29:42.831784   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:42.832251   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:42.832267   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:42.832594   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:42.832621   10436 retry.go:31] will retry after 532.101639ms: waiting for domain to come up
	I1108 08:29:43.366280   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:43.366802   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:43.366840   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:43.367090   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:43.367119   10436 retry.go:31] will retry after 878.757094ms: waiting for domain to come up
	I1108 08:29:44.247155   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:44.247724   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:44.247742   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:44.247986   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:44.248030   10436 retry.go:31] will retry after 1.096876927s: waiting for domain to come up
	I1108 08:29:45.346600   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:45.347095   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:45.347125   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:45.347404   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:45.347439   10436 retry.go:31] will retry after 1.271050324s: waiting for domain to come up
	I1108 08:29:46.620792   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:46.621294   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:46.621308   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:46.621572   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:46.621607   10436 retry.go:31] will retry after 1.302682934s: waiting for domain to come up
	I1108 08:29:47.926162   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:47.926702   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:47.926721   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:47.926989   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:47.927026   10436 retry.go:31] will retry after 2.226003477s: waiting for domain to come up
	I1108 08:29:50.154650   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:50.155230   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:50.155253   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:50.155574   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:50.155617   10436 retry.go:31] will retry after 2.471564924s: waiting for domain to come up
	I1108 08:29:52.629595   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:52.630118   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:52.630135   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:52.630427   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:52.630478   10436 retry.go:31] will retry after 2.29222248s: waiting for domain to come up
	I1108 08:29:54.923955   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:54.924411   10436 main.go:143] libmachine: no network interface addresses found for domain addons-982714 (source=lease)
	I1108 08:29:54.924427   10436 main.go:143] libmachine: trying to list again with source=arp
	I1108 08:29:54.924707   10436 main.go:143] libmachine: unable to find current IP address of domain addons-982714 in network mk-addons-982714 (interfaces detected: [])
	I1108 08:29:54.924744   10436 retry.go:31] will retry after 2.799374372s: waiting for domain to come up
	I1108 08:29:57.727612   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:57.728202   10436 main.go:143] libmachine: domain addons-982714 has current primary IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:57.728218   10436 main.go:143] libmachine: found domain IP: 192.168.39.224
	I1108 08:29:57.728225   10436 main.go:143] libmachine: reserving static IP address...
	I1108 08:29:57.728639   10436 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-982714", mac: "52:54:00:84:e4:dc", ip: "192.168.39.224"} in network mk-addons-982714
	I1108 08:29:57.910955   10436 main.go:143] libmachine: reserved static IP address 192.168.39.224 for domain addons-982714
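[Note] The wait-for-IP loop above polls the DHCP lease table first (source=lease), falls back to the ARP cache (source=arp), and after each miss sleeps for a growing, jittered interval (228ms, 254ms, 342ms, ... up to 2.8s) until the lease for 192.168.39.224 appears. A self-contained sketch of that loop shape; the multiplier, jitter and cap here are illustrative, not minikube's actual retry tuning:

// retry_sketch.go - poll with jittered, growing backoff until success.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryExpo(poll func() (string, error), initial, max, deadline time.Duration) (string, error) {
	start := time.Now()
	wait := initial
	for {
		ip, err := poll()
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out waiting for domain: %w", err)
		}
		// Jitter the delay so concurrent waiters do not poll in lockstep;
		// this is why the logged intervals grow unevenly (541ms before
		// 532ms, 2.47s before 2.29s) rather than doubling exactly.
		sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
		if wait > max {
			wait = max
		}
	}
}

func main() {
	attempts := 0
	ip, err := retryExpo(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no lease yet") // stand-in for the lease/arp lookup
		}
		return "192.168.39.224", nil
	}, 200*time.Millisecond, 3*time.Second, time.Minute)
	fmt.Println(ip, err)
}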
	I1108 08:29:57.910976   10436 main.go:143] libmachine: waiting for SSH...
	I1108 08:29:57.910987   10436 main.go:143] libmachine: Getting to WaitForSSH function...
	I1108 08:29:57.913786   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:57.914202   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:minikube Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:57.914237   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:57.914455   10436 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:57.914710   10436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1108 08:29:57.914724   10436 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1108 08:29:58.027003   10436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
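[Note] "Waiting for SSH" is a liveness probe: dial port 22 as the docker user with the generated machine key and run "exit 0" until it succeeds, which proves TCP reachability, key auth and a working shell in one step. A sketch with golang.org/x/crypto/ssh; host-key checking is disabled here for brevity, as is usual for throwaway local VMs, and the key path is a stand-in for the machine's generated id_rsa:

// ssh_wait_sketch.go - retry `exit 0` over SSH until the guest is reachable.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local throwaway VM
		Timeout:         5 * time.Second,
	}

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				// `exit 0` proves the whole path: TCP, key auth, and a
				// shell that can run commands.
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(time.Second)
	}
	return os.ErrDeadlineExceeded
}

func main() {
	if err := waitForSSH("192.168.39.224:22", "docker", "id_rsa", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("ssh is up")
}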
	I1108 08:29:58.027330   10436 main.go:143] libmachine: domain creation complete
	I1108 08:29:58.028685   10436 machine.go:94] provisionDockerMachine start ...
	I1108 08:29:58.031180   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.031545   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:58.031570   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.031722   10436 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:58.031920   10436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1108 08:29:58.031933   10436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 08:29:58.145193   10436 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1108 08:29:58.145218   10436 buildroot.go:166] provisioning hostname "addons-982714"
	I1108 08:29:58.147651   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.147998   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:58.148032   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.148192   10436 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:58.148393   10436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1108 08:29:58.148409   10436 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-982714 && echo "addons-982714" | sudo tee /etc/hostname
	I1108 08:29:58.275060   10436 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-982714
	
	I1108 08:29:58.277593   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.277937   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:58.277962   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.278133   10436 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:58.278311   10436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1108 08:29:58.278327   10436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-982714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-982714/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-982714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 08:29:58.398937   10436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 08:29:58.399008   10436 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5845/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5845/.minikube}
	I1108 08:29:58.399048   10436 buildroot.go:174] setting up certificates
	I1108 08:29:58.399062   10436 provision.go:84] configureAuth start
	I1108 08:29:58.402040   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.402516   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:58.402547   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.405070   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.405562   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:58.405599   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.405774   10436 provision.go:143] copyHostCerts
	I1108 08:29:58.405850   10436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem (1123 bytes)
	I1108 08:29:58.406004   10436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem (1675 bytes)
	I1108 08:29:58.406082   10436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem (1082 bytes)
	I1108 08:29:58.406142   10436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem org=jenkins.addons-982714 san=[127.0.0.1 192.168.39.224 addons-982714 localhost minikube]
	I1108 08:29:58.958210   10436 provision.go:177] copyRemoteCerts
	I1108 08:29:58.958265   10436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 08:29:58.960826   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.961177   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:58.961199   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:58.961351   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:29:59.050355   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 08:29:59.080310   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 08:29:59.109225   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 08:29:59.138776   10436 provision.go:87] duration metric: took 739.697224ms to configureAuth
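configureAuth generates a CA-signed server certificate for the machine with the SAN list logged above (127.0.0.1, 192.168.39.224, addons-982714, localhost, minikube). A self-contained crypto/x509 sketch of issuing such a cert; it creates a throwaway CA for illustration, whereas minikube signs with the ca.pem/ca-key.pem it just copied:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; errors ignored for brevity in this sketch.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the same SANs the log reports.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-982714"}},
    		DNSNames:     []string{"addons-982714", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.224")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }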
	I1108 08:29:59.138806   10436 buildroot.go:189] setting minikube options for container-runtime
	I1108 08:29:59.138994   10436 config.go:182] Loaded profile config "addons-982714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:59.141743   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.142079   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.142106   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.142288   10436 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:59.142461   10436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1108 08:29:59.142474   10436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 08:29:59.384377   10436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 08:29:59.384401   10436 machine.go:97] duration metric: took 1.355700625s to provisionDockerMachine
	I1108 08:29:59.384412   10436 client.go:176] duration metric: took 19.892640588s to LocalClient.Create
	I1108 08:29:59.384425   10436 start.go:167] duration metric: took 19.892684937s to libmachine.API.Create "addons-982714"
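Provisioning ends by writing a one-line sysconfig drop-in and restarting crio, as the SSH command above shows. The same step expressed locally in Go (paths and contents taken from the log; a sketch, not minikube's code):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
    	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0644); err != nil {
    		panic(err)
    	}
    	// Restart crio so it picks up the new option.
    	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }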
	I1108 08:29:59.384434   10436 start.go:293] postStartSetup for "addons-982714" (driver="kvm2")
	I1108 08:29:59.384448   10436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 08:29:59.384634   10436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 08:29:59.387611   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.388020   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.388042   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.388216   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:29:59.474212   10436 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 08:29:59.479168   10436 info.go:137] Remote host: Buildroot 2025.02
	I1108 08:29:59.479193   10436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/addons for local assets ...
	I1108 08:29:59.479275   10436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/files for local assets ...
	I1108 08:29:59.479309   10436 start.go:296] duration metric: took 94.867635ms for postStartSetup
	I1108 08:29:59.482158   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.482576   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.482606   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.482816   10436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/config.json ...
	I1108 08:29:59.483014   10436 start.go:128] duration metric: took 19.993365865s to createHost
	I1108 08:29:59.485147   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.485477   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.485522   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.485689   10436 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:59.485919   10436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1108 08:29:59.485931   10436 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1108 08:29:59.598206   10436 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762590599.561438952
	
	I1108 08:29:59.598234   10436 fix.go:216] guest clock: 1762590599.561438952
	I1108 08:29:59.598244   10436 fix.go:229] Guest: 2025-11-08 08:29:59.561438952 +0000 UTC Remote: 2025-11-08 08:29:59.483027142 +0000 UTC m=+20.086806597 (delta=78.41181ms)
	I1108 08:29:59.598269   10436 fix.go:200] guest clock delta is within tolerance: 78.41181ms
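The guest-clock check reads `date +%s.%N` over SSH and diffs it against the host's wall clock; here the skew is 78.41ms, inside tolerance, so no resync happens. A sketch of that comparison using the two literal timestamps from the log and an assumed one-second tolerance:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest: parsed from `date +%s.%N` output 1762590599.561438952.
    	guest := time.Unix(1762590599, 561438952)
    	// Host wall clock at the moment of the check (from the log).
    	host := time.Date(2025, 11, 8, 8, 29, 59, 483027142, time.UTC)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
    	}
    }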
	I1108 08:29:59.598274   10436 start.go:83] releasing machines lock for "addons-982714", held for 20.108694081s
	I1108 08:29:59.601166   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.601540   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.601562   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.602096   10436 ssh_runner.go:195] Run: cat /version.json
	I1108 08:29:59.602124   10436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 08:29:59.605110   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.605144   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.605474   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.605510   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.605553   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:29:59.605584   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:29:59.605646   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:29:59.605873   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:29:59.709982   10436 ssh_runner.go:195] Run: systemctl --version
	I1108 08:29:59.715830   10436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 08:29:59.872684   10436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 08:29:59.879947   10436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 08:29:59.880031   10436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 08:29:59.900280   10436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 08:29:59.900317   10436 start.go:496] detecting cgroup driver to use...
	I1108 08:29:59.900393   10436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 08:29:59.920249   10436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 08:29:59.936881   10436 docker.go:218] disabling cri-docker service (if available) ...
	I1108 08:29:59.936930   10436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 08:29:59.954454   10436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 08:29:59.969835   10436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 08:30:00.118377   10436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 08:30:00.329414   10436 docker.go:234] disabling docker service ...
	I1108 08:30:00.329492   10436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 08:30:00.345803   10436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 08:30:00.360269   10436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 08:30:00.507744   10436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 08:30:00.651647   10436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 08:30:00.667553   10436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 08:30:00.690348   10436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 08:30:00.690421   10436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:30:00.702602   10436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 08:30:00.702656   10436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:30:00.715370   10436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:30:00.727779   10436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:30:00.739918   10436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 08:30:00.752854   10436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:30:00.764629   10436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:30:00.784572   10436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
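The block above reshapes /etc/crio/crio.conf.d/02-crio.conf with in-place sed: pause image, cgroupfs as cgroup manager, conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls. The same line-oriented rewrite in Go for two of those keys (CRI-O's config is TOML, and the log shows it really is edited textually like this):

    package main

    import (
    	"os"
    	"regexp"
    )

    // setKey replaces any existing `key = ...` line with the given value,
    // mirroring the sed one-liners in the log.
    func setKey(data []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
    	return re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	data = setKey(data, "pause_image", "registry.k8s.io/pause:3.10.1")
    	data = setKey(data, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(conf, data, 0644); err != nil {
    		panic(err)
    	}
    }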
	I1108 08:30:00.796825   10436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 08:30:00.806981   10436 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 08:30:00.807039   10436 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 08:30:00.829976   10436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 08:30:00.842468   10436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:30:00.978556   10436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 08:30:01.099400   10436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 08:30:01.099476   10436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 08:30:01.105201   10436 start.go:564] Will wait 60s for crictl version
	I1108 08:30:01.105292   10436 ssh_runner.go:195] Run: which crictl
	I1108 08:30:01.109679   10436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 08:30:01.152361   10436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1108 08:30:01.152473   10436 ssh_runner.go:195] Run: crio --version
	I1108 08:30:01.183090   10436 ssh_runner.go:195] Run: crio --version
	I1108 08:30:01.215662   10436 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1108 08:30:01.219816   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:01.220234   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:01.220261   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:01.220482   10436 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 08:30:01.225411   10436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 08:30:01.241666   10436 kubeadm.go:884] updating cluster {Name:addons-982714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-982714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 08:30:01.241801   10436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:30:01.241847   10436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 08:30:01.278591   10436 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1108 08:30:01.278678   10436 ssh_runner.go:195] Run: which lz4
	I1108 08:30:01.283289   10436 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 08:30:01.288463   10436 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 08:30:01.288519   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1108 08:30:02.912704   10436 crio.go:462] duration metric: took 1.62945322s to copy over tarball
	I1108 08:30:02.912775   10436 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 08:30:04.596372   10436 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.683568219s)
	I1108 08:30:04.596400   10436 crio.go:469] duration metric: took 1.683667747s to extract the tarball
	I1108 08:30:04.596422   10436 ssh_runner.go:146] rm: /preloaded.tar.lz4
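Since no preloaded images were found, the ~409 MB preload tarball is copied up and unpacked into /var with lz4-compressed tar, then deleted. The extract step, as a local Go wrapper around the same tar invocation the log shows:

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Matches the logged command: keep xattrs, decompress with lz4,
    	// unpack under /var, then delete the tarball.
    	cmd := exec.Command("tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
    		panic(err)
    	}
    }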
	I1108 08:30:04.638347   10436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 08:30:04.686177   10436 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 08:30:04.686200   10436 cache_images.go:86] Images are preloaded, skipping loading
	I1108 08:30:04.686208   10436 kubeadm.go:935] updating node { 192.168.39.224 8443 v1.34.1 crio true true} ...
	I1108 08:30:04.686282   10436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-982714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-982714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 08:30:04.686345   10436 ssh_runner.go:195] Run: crio config
	I1108 08:30:04.736526   10436 cni.go:84] Creating CNI manager for ""
	I1108 08:30:04.736555   10436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 08:30:04.736576   10436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 08:30:04.736594   10436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-982714 NodeName:addons-982714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 08:30:04.736706   10436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-982714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.224"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
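The generated kubeadm.yaml above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check that lists the kinds in such a multi-document file (gopkg.in/yaml.v3 assumed as the parser):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f) // handles `---`-separated documents
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }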
	
	I1108 08:30:04.736760   10436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 08:30:04.749740   10436 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 08:30:04.749812   10436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 08:30:04.762677   10436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1108 08:30:04.786037   10436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 08:30:04.808338   10436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1108 08:30:04.830561   10436 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I1108 08:30:04.834841   10436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 08:30:04.851020   10436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:30:04.995775   10436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 08:30:05.040962   10436 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714 for IP: 192.168.39.224
	I1108 08:30:05.040987   10436 certs.go:195] generating shared ca certs ...
	I1108 08:30:05.041007   10436 certs.go:227] acquiring lock for ca certs: {Name:mkf9b4566d45fc9bb33b533126e27cef8349b756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.041187   10436 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key
	I1108 08:30:05.387235   10436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt ...
	I1108 08:30:05.387267   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt: {Name:mk3ac7ed82f595935d2a72aaddaae1a34410df21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.387482   10436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key ...
	I1108 08:30:05.387511   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key: {Name:mkb5d7cd7da0f4454c3d99dadcf29214e477943b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.387625   10436 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key
	I1108 08:30:05.494623   10436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.crt ...
	I1108 08:30:05.494650   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.crt: {Name:mk572ed1e755eb044aac401e4d9dfe3b8be71757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.494834   10436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key ...
	I1108 08:30:05.494849   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key: {Name:mk42c06bd86f7cd42ce83b31ca6fa7179036a0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.495419   10436 certs.go:257] generating profile certs ...
	I1108 08:30:05.495487   10436 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.key
	I1108 08:30:05.495525   10436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt with IP's: []
	I1108 08:30:05.678814   10436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt ...
	I1108 08:30:05.678842   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: {Name:mk4ea1664bd52885262821096336ea127067f81a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.679032   10436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.key ...
	I1108 08:30:05.679050   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.key: {Name:mkf584e272276e3ff641699f5855a3c01e589879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:05.679154   10436 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.key.1ea7599a
	I1108 08:30:05.679181   10436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.crt.1ea7599a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.224]
	I1108 08:30:06.091173   10436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.crt.1ea7599a ...
	I1108 08:30:06.091205   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.crt.1ea7599a: {Name:mk82680e0e292b9314c5a94530f5cbfd1b672aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:06.091394   10436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.key.1ea7599a ...
	I1108 08:30:06.091413   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.key.1ea7599a: {Name:mkfecad36746d83db67ed0f49e1bcec208c990b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:06.091533   10436 certs.go:382] copying /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.crt.1ea7599a -> /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.crt
	I1108 08:30:06.091660   10436 certs.go:386] copying /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.key.1ea7599a -> /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.key
	I1108 08:30:06.091743   10436 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.key
	I1108 08:30:06.091766   10436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.crt with IP's: []
	I1108 08:30:06.149869   10436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.crt ...
	I1108 08:30:06.149898   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.crt: {Name:mkd2957b593109851d5a0ed8cde1683b0b9e6b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:06.150091   10436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.key ...
	I1108 08:30:06.150107   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.key: {Name:mk6560e675e44f11cbabbf34f1fb169a6aab95c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
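The apiserver profile cert above is issued for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.224]; 10.96.0.1 is the first usable address of the ServiceCIDR 10.96.0.0/12, where the in-cluster `kubernetes` Service listens. Deriving that first address from the CIDR:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	prefix := netip.MustParsePrefix("10.96.0.0/12")
    	first := prefix.Addr().Next() // network address + 1
    	fmt.Println(first)            // 10.96.0.1
    }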
	I1108 08:30:06.150326   10436 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 08:30:06.150366   10436 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem (1082 bytes)
	I1108 08:30:06.150402   10436 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem (1123 bytes)
	I1108 08:30:06.150429   10436 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem (1675 bytes)
	I1108 08:30:06.151025   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 08:30:06.185077   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 08:30:06.218608   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 08:30:06.251599   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 08:30:06.284381   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 08:30:06.316517   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 08:30:06.348929   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 08:30:06.381744   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 08:30:06.414643   10436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 08:30:06.447922   10436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 08:30:06.471026   10436 ssh_runner.go:195] Run: openssl version
	I1108 08:30:06.478201   10436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 08:30:06.492904   10436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:30:06.498729   10436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:30:06.498790   10436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:30:06.507181   10436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 08:30:06.522142   10436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 08:30:06.527827   10436 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 08:30:06.527903   10436 kubeadm.go:401] StartCluster: {Name:addons-982714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-982714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:30:06.527995   10436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:30:06.528043   10436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:30:06.571439   10436 cri.go:89] found id: ""
	I1108 08:30:06.571519   10436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 08:30:06.585091   10436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 08:30:06.598507   10436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 08:30:06.611796   10436 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 08:30:06.611815   10436 kubeadm.go:158] found existing configuration files:
	
	I1108 08:30:06.611870   10436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 08:30:06.626285   10436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 08:30:06.626353   10436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 08:30:06.640379   10436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 08:30:06.656555   10436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 08:30:06.656631   10436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 08:30:06.672857   10436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 08:30:06.691288   10436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 08:30:06.691375   10436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 08:30:06.707290   10436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 08:30:06.719562   10436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 08:30:06.719629   10436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
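Each stale-config check in this stretch is the same grep-then-rm pattern: a kubeconfig that does not point at control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it (here all four files are simply absent on first start). Condensed into one loop:

    package main

    import (
    	"bytes"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			os.Remove(f) // stale or missing: let kubeadm rewrite it
    		}
    	}
    }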
	I1108 08:30:06.732238   10436 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 08:30:06.786337   10436 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 08:30:06.786415   10436 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 08:30:06.891848   10436 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 08:30:06.892038   10436 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 08:30:06.892162   10436 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 08:30:06.903069   10436 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 08:30:07.116789   10436 out.go:252]   - Generating certificates and keys ...
	I1108 08:30:07.116914   10436 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 08:30:07.116983   10436 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 08:30:07.123459   10436 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 08:30:07.168597   10436 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 08:30:07.284005   10436 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 08:30:07.632633   10436 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 08:30:07.833695   10436 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 08:30:07.833952   10436 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-982714 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I1108 08:30:08.043367   10436 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 08:30:08.043648   10436 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-982714 localhost] and IPs [192.168.39.224 127.0.0.1 ::1]
	I1108 08:30:08.250855   10436 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 08:30:08.563651   10436 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 08:30:08.651395   10436 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 08:30:08.651521   10436 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 08:30:08.751575   10436 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 08:30:09.078760   10436 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 08:30:09.548931   10436 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 08:30:09.800739   10436 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 08:30:09.855832   10436 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 08:30:09.855993   10436 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 08:30:09.859453   10436 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 08:30:09.861399   10436 out.go:252]   - Booting up control plane ...
	I1108 08:30:09.861518   10436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 08:30:09.861634   10436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 08:30:09.861864   10436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 08:30:09.889951   10436 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 08:30:09.890083   10436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 08:30:09.901436   10436 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 08:30:09.901901   10436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 08:30:09.901959   10436 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 08:30:10.094285   10436 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 08:30:10.094412   10436 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 08:30:10.595403   10436 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.749373ms
	I1108 08:30:10.602003   10436 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 08:30:10.602118   10436 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.224:8443/livez
	I1108 08:30:10.602266   10436 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 08:30:10.602391   10436 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 08:30:12.863526   10436 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.262638335s
	I1108 08:30:14.489160   10436 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.890593904s
	I1108 08:30:16.599351   10436 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001596134s
	I1108 08:30:16.613833   10436 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 08:30:16.635262   10436 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 08:30:16.659278   10436 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 08:30:16.659514   10436 kubeadm.go:319] [mark-control-plane] Marking the node addons-982714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 08:30:16.674453   10436 kubeadm.go:319] [bootstrap-token] Using token: wz6uwc.xinyuor96l2duhpx
	I1108 08:30:16.676267   10436 out.go:252]   - Configuring RBAC rules ...
	I1108 08:30:16.676400   10436 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 08:30:16.688738   10436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 08:30:16.701508   10436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 08:30:16.705725   10436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 08:30:16.710962   10436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 08:30:16.714438   10436 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 08:30:17.007861   10436 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 08:30:17.503235   10436 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 08:30:18.004996   10436 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 08:30:18.006244   10436 kubeadm.go:319] 
	I1108 08:30:18.006333   10436 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 08:30:18.006342   10436 kubeadm.go:319] 
	I1108 08:30:18.006449   10436 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 08:30:18.006475   10436 kubeadm.go:319] 
	I1108 08:30:18.006536   10436 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 08:30:18.006608   10436 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 08:30:18.006704   10436 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 08:30:18.006725   10436 kubeadm.go:319] 
	I1108 08:30:18.006806   10436 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 08:30:18.006820   10436 kubeadm.go:319] 
	I1108 08:30:18.006897   10436 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 08:30:18.006915   10436 kubeadm.go:319] 
	I1108 08:30:18.006987   10436 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 08:30:18.007085   10436 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 08:30:18.007180   10436 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 08:30:18.007191   10436 kubeadm.go:319] 
	I1108 08:30:18.007312   10436 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 08:30:18.007426   10436 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 08:30:18.007447   10436 kubeadm.go:319] 
	I1108 08:30:18.007564   10436 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wz6uwc.xinyuor96l2duhpx \
	I1108 08:30:18.007698   10436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:125e259a8f07963bb73ede80e28d377ea6fea7352ad3952bda5349cb7a425ca0 \
	I1108 08:30:18.007730   10436 kubeadm.go:319] 	--control-plane 
	I1108 08:30:18.007738   10436 kubeadm.go:319] 
	I1108 08:30:18.007851   10436 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 08:30:18.007861   10436 kubeadm.go:319] 
	I1108 08:30:18.007959   10436 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wz6uwc.xinyuor96l2duhpx \
	I1108 08:30:18.008091   10436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:125e259a8f07963bb73ede80e28d377ea6fea7352ad3952bda5349cb7a425ca0 
	I1108 08:30:18.009996   10436 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
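The --discovery-token-ca-cert-hash in the join command is kubeadm's public-key pin: a SHA-256 over the CA certificate's Subject Public Key Info, not over the whole certificate. Given the cluster CA, it can be recomputed like this:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins sha256 of the SubjectPublicKeyInfo, not the whole cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }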
	I1108 08:30:18.010030   10436 cni.go:84] Creating CNI manager for ""
	I1108 08:30:18.010039   10436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 08:30:18.012042   10436 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 08:30:18.013278   10436 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 08:30:18.027448   10436 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
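The 496-byte /etc/cni/net.d/1-k8s.conflist written here is a standard bridge CNI chain. A sketch that emits a conflist of the same general shape (field values are illustrative; minikube templates its own, and only the 10.244.0.0/16 pod CIDR is taken from this log):

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	conf := map[string]any{
    		"cniVersion": "1.0.0",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16", // pod CIDR from the kubeadm config
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	enc := json.NewEncoder(os.Stdout)
    	enc.SetIndent("", "  ")
    	enc.Encode(conf)
    }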
	I1108 08:30:18.058082   10436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 08:30:18.058210   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:18.058266   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-982714 minikube.k8s.io/updated_at=2025_11_08T08_30_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=addons-982714 minikube.k8s.io/primary=true
	I1108 08:30:18.106252   10436 ops.go:34] apiserver oom_adj: -16
	I1108 08:30:18.263227   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:18.763305   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:19.263880   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:19.763881   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:20.263880   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:20.764166   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:21.264070   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:21.763361   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:22.263598   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:22.763586   10436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:30:22.842943   10436 kubeadm.go:1114] duration metric: took 4.78479383s to wait for elevateKubeSystemPrivileges
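(The repeated "kubectl get sa default" runs above, one roughly every 500ms, are a readiness poll: the RBAC elevation only makes sense once the controller manager has created the default service account. The same wait expressed against client-go; an illustrative sketch, with helper name and timeout mine rather than minikube's:)

    package kverify

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists,
    // mirroring the kubectl loop in the log (illustrative sketch).
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // not created yet; keep polling
    			}
    			return err == nil, err
    		})
    }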
	I1108 08:30:22.842984   10436 kubeadm.go:403] duration metric: took 16.315089536s to StartCluster
	I1108 08:30:22.843013   10436 settings.go:142] acquiring lock: {Name:mk0d0617389eeb9d724259ab95a170c08eef0474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:22.843159   10436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 08:30:22.843673   10436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:30:22.843876   10436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 08:30:22.843912   10436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 08:30:22.843949   10436 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 08:30:22.844090   10436 addons.go:70] Setting yakd=true in profile "addons-982714"
	I1108 08:30:22.844111   10436 config.go:182] Loaded profile config "addons-982714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:30:22.844124   10436 addons.go:70] Setting storage-provisioner=true in profile "addons-982714"
	I1108 08:30:22.844136   10436 addons.go:239] Setting addon storage-provisioner=true in "addons-982714"
	I1108 08:30:22.844117   10436 addons.go:239] Setting addon yakd=true in "addons-982714"
	I1108 08:30:22.844156   10436 addons.go:70] Setting inspektor-gadget=true in profile "addons-982714"
	I1108 08:30:22.844180   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.844185   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.844182   10436 addons.go:70] Setting registry-creds=true in profile "addons-982714"
	I1108 08:30:22.844214   10436 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-982714"
	I1108 08:30:22.844226   10436 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-982714"
	I1108 08:30:22.844228   10436 addons.go:239] Setting addon registry-creds=true in "addons-982714"
	I1108 08:30:22.844231   10436 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-982714"
	I1108 08:30:22.844272   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.844261   10436 addons.go:70] Setting registry=true in profile "addons-982714"
	I1108 08:30:22.844286   10436 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-982714"
	I1108 08:30:22.844297   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.844299   10436 addons.go:239] Setting addon registry=true in "addons-982714"
	I1108 08:30:22.844337   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.844346   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.845022   10436 addons.go:70] Setting volcano=true in profile "addons-982714"
	I1108 08:30:22.845049   10436 addons.go:239] Setting addon volcano=true in "addons-982714"
	I1108 08:30:22.845075   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.845079   10436 addons.go:70] Setting metrics-server=true in profile "addons-982714"
	I1108 08:30:22.845103   10436 addons.go:239] Setting addon metrics-server=true in "addons-982714"
	I1108 08:30:22.845125   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.845239   10436 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-982714"
	I1108 08:30:22.845261   10436 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-982714"
	I1108 08:30:22.845288   10436 addons.go:70] Setting volumesnapshots=true in profile "addons-982714"
	I1108 08:30:22.845302   10436 addons.go:239] Setting addon volumesnapshots=true in "addons-982714"
	I1108 08:30:22.845325   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.845357   10436 addons.go:70] Setting cloud-spanner=true in profile "addons-982714"
	I1108 08:30:22.845375   10436 addons.go:239] Setting addon cloud-spanner=true in "addons-982714"
	I1108 08:30:22.845398   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.844207   10436 addons.go:239] Setting addon inspektor-gadget=true in "addons-982714"
	I1108 08:30:22.845535   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.846011   10436 addons.go:70] Setting gcp-auth=true in profile "addons-982714"
	I1108 08:30:22.846037   10436 mustload.go:66] Loading cluster: addons-982714
	I1108 08:30:22.846251   10436 config.go:182] Loaded profile config "addons-982714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:30:22.846309   10436 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-982714"
	I1108 08:30:22.846387   10436 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-982714"
	I1108 08:30:22.846415   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.846513   10436 addons.go:70] Setting default-storageclass=true in profile "addons-982714"
	I1108 08:30:22.846537   10436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-982714"
	I1108 08:30:22.846599   10436 addons.go:70] Setting ingress=true in profile "addons-982714"
	I1108 08:30:22.846618   10436 addons.go:239] Setting addon ingress=true in "addons-982714"
	I1108 08:30:22.846645   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.847043   10436 addons.go:70] Setting ingress-dns=true in profile "addons-982714"
	I1108 08:30:22.847064   10436 addons.go:239] Setting addon ingress-dns=true in "addons-982714"
	I1108 08:30:22.847100   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.847208   10436 out.go:179] * Verifying Kubernetes components...
	I1108 08:30:22.848589   10436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:30:22.850998   10436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 08:30:22.851009   10436 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 08:30:22.851038   10436 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 08:30:22.851136   10436 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 08:30:22.851144   10436 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 08:30:22.852301   10436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 08:30:22.852322   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W1108 08:30:22.852513   10436 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 08:30:22.853186   10436 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 08:30:22.853192   10436 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 08:30:22.853206   10436 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 08:30:22.853212   10436 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 08:30:22.853232   10436 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 08:30:22.853801   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 08:30:22.853248   10436 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 08:30:22.853924   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 08:30:22.853957   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 08:30:22.853976   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.853275   10436 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 08:30:22.854042   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 08:30:22.854425   10436 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 08:30:22.854818   10436 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-982714"
	I1108 08:30:22.854908   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.854978   10436 addons.go:239] Setting addon default-storageclass=true in "addons-982714"
	I1108 08:30:22.855036   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:22.855184   10436 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 08:30:22.855617   10436 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 08:30:22.855909   10436 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 08:30:22.855985   10436 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 08:30:22.856281   10436 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 08:30:22.855942   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 08:30:22.855937   10436 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 08:30:22.856345   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 08:30:22.856567   10436 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:30:22.856577   10436 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 08:30:22.856579   10436 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 08:30:22.857230   10436 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 08:30:22.857246   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 08:30:22.858012   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 08:30:22.858048   10436 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 08:30:22.858060   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 08:30:22.858064   10436 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 08:30:22.858075   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 08:30:22.858906   10436 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:30:22.860042   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 08:30:22.860833   10436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 08:30:22.860853   10436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 08:30:22.861124   10436 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 08:30:22.861124   10436 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1108 08:30:22.862284   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 08:30:22.862382   10436 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 08:30:22.862399   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 08:30:22.863298   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.863478   10436 out.go:179]   - Using image docker.io/busybox:stable
	I1108 08:30:22.864566   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 08:30:22.864659   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.864737   10436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 08:30:22.864756   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 08:30:22.865154   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.865178   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.865187   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.865279   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.865542   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.865995   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.866060   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.866087   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.866646   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 08:30:22.866784   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.866811   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.866806   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.866879   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.866907   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.867144   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.867645   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.867798   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.867828   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.868083   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.868094   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.869085   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.869229   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.869980   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 08:30:22.870701   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.870734   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.870936   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.870964   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.870973   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.871448   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.871680   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.871723   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.871748   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.872091   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.872159   10436 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 08:30:22.872238   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.872265   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.872121   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.872667   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.872731   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.873017   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.873054   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.873249   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.873355   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.873407   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 08:30:22.873423   10436 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 08:30:22.873592   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.873621   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.873973   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.874220   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.874270   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.874310   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.874503   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.875040   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.875079   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.875272   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.875374   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.875998   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.876024   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.876152   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:22.876969   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.877268   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:22.877288   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:22.877416   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	W1108 08:30:23.076517   10436 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59626->192.168.39.224:22: read: connection reset by peer
	I1108 08:30:23.076554   10436 retry.go:31] will retry after 304.266276ms: ssh: handshake failed: read tcp 192.168.39.1:59626->192.168.39.224:22: read: connection reset by peer
	W1108 08:30:23.120258   10436 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59636->192.168.39.224:22: read: connection reset by peer
	I1108 08:30:23.120288   10436 retry.go:31] will retry after 237.671025ms: ssh: handshake failed: read tcp 192.168.39.1:59636->192.168.39.224:22: read: connection reset by peer
	W1108 08:30:23.120370   10436 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59652->192.168.39.224:22: read: connection reset by peer
	I1108 08:30:23.120382   10436 retry.go:31] will retry after 345.812781ms: ssh: handshake failed: read tcp 192.168.39.1:59652->192.168.39.224:22: read: connection reset by peer
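(The handshake failures above are transient: the guest's sshd is still settling while many addon installers dial it concurrently, so each dialer simply retries after a short randomized delay. The general shape of that policy, as a jittered exponential-backoff sketch; illustrative only, minikube's retry.go has its own tuning:)

    package sshretry

    import (
    	"math/rand"
    	"time"
    )

    // dialWithRetry calls dial up to maxAttempts times, sleeping a jittered,
    // exponentially growing delay between failures (illustrative sketch).
    func dialWithRetry(dial func() error, maxAttempts int) error {
    	var err error
    	delay := 200 * time.Millisecond
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = dial(); err == nil {
    			return nil
    		}
    		// Up to +50% jitter keeps concurrent dialers from retrying in lockstep.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		delay *= 2
    	}
    	return err
    }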
	I1108 08:30:23.397484   10436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 08:30:23.397539   10436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
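(The long pipeline above edits the coredns ConfigMap in place: sed splices a hosts stanza in front of the "forward . /etc/resolv.conf" line and a log directive in front of errors, then feeds the result back through kubectl replace. Reconstructed from the sed expressions in the command, the affected part of the Corefile afterwards reads:)

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

(The fallthrough directive lets any query that doesn't match the injected record continue to the forward plugin, so only host.minikube.internal resolution changes; the "host record injected" message further down confirms the replace succeeded.)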
	I1108 08:30:23.417845   10436 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 08:30:23.417874   10436 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 08:30:23.465689   10436 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 08:30:23.465711   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 08:30:23.474965   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 08:30:23.500243   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 08:30:23.514069   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 08:30:23.519229   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 08:30:23.650753   10436 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 08:30:23.650780   10436 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 08:30:23.693876   10436 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 08:30:23.693901   10436 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 08:30:23.714420   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 08:30:23.724129   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 08:30:23.750995   10436 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 08:30:23.751014   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 08:30:23.809646   10436 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 08:30:23.809674   10436 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 08:30:23.887772   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 08:30:23.901530   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 08:30:24.165640   10436 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 08:30:24.165663   10436 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 08:30:24.191009   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 08:30:24.199713   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 08:30:24.283471   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 08:30:24.352460   10436 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 08:30:24.352518   10436 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 08:30:24.630734   10436 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 08:30:24.630770   10436 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 08:30:24.657614   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 08:30:24.657640   10436 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 08:30:24.889602   10436 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 08:30:24.889632   10436 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 08:30:25.080665   10436 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 08:30:25.080696   10436 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 08:30:25.570485   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 08:30:25.570537   10436 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 08:30:25.581518   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 08:30:25.705332   10436 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 08:30:25.705359   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 08:30:25.835024   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 08:30:25.835054   10436 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 08:30:25.963919   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 08:30:25.963946   10436 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 08:30:26.156622   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 08:30:26.289989   10436 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:30:26.290011   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 08:30:26.354084   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 08:30:26.354106   10436 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 08:30:26.896023   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:30:26.951238   10436 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 08:30:26.951276   10436 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 08:30:27.211897   10436 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.814364846s)
	I1108 08:30:27.211912   10436 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.814324014s)
	I1108 08:30:27.211937   10436 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1108 08:30:27.212592   10436 node_ready.go:35] waiting up to 6m0s for node "addons-982714" to be "Ready" ...
	I1108 08:30:27.248088   10436 node_ready.go:49] node "addons-982714" is "Ready"
	I1108 08:30:27.248126   10436 node_ready.go:38] duration metric: took 35.507452ms for node "addons-982714" to be "Ready" ...
	I1108 08:30:27.248141   10436 api_server.go:52] waiting for apiserver process to appear ...
	I1108 08:30:27.248200   10436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:30:27.459195   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.984191462s)
	I1108 08:30:27.676617   10436 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 08:30:27.676648   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 08:30:27.721164   10436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-982714" context rescaled to 1 replicas
	I1108 08:30:28.201302   10436 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 08:30:28.201330   10436 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 08:30:28.507303   10436 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 08:30:28.507325   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 08:30:28.808018   10436 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 08:30:28.808040   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 08:30:29.048588   10436 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 08:30:29.048615   10436 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 08:30:29.349137   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 08:30:29.559641   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.059355733s)
	I1108 08:30:30.418447   10436 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 08:30:30.420732   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:30.421055   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:30.421076   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:30.421208   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:30.629923   10436 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 08:30:30.663883   10436 addons.go:239] Setting addon gcp-auth=true in "addons-982714"
	I1108 08:30:30.663933   10436 host.go:66] Checking if "addons-982714" exists ...
	I1108 08:30:30.665492   10436 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 08:30:30.667795   10436 main.go:143] libmachine: domain addons-982714 has defined MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:30.668167   10436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:84:e4:dc", ip: ""} in network mk-addons-982714: {Iface:virbr1 ExpiryTime:2025-11-08 09:29:55 +0000 UTC Type:0 Mac:52:54:00:84:e4:dc Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:addons-982714 Clientid:01:52:54:00:84:e4:dc}
	I1108 08:30:30.668190   10436 main.go:143] libmachine: domain addons-982714 has defined IP address 192.168.39.224 and MAC address 52:54:00:84:e4:dc in network mk-addons-982714
	I1108 08:30:30.668306   10436 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/addons-982714/id_rsa Username:docker}
	I1108 08:30:32.316110   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.796845491s)
	I1108 08:30:32.316160   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.601712775s)
	I1108 08:30:32.316270   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.592110535s)
	I1108 08:30:32.316305   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.428505869s)
	I1108 08:30:32.316360   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.414810819s)
	I1108 08:30:32.316391   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.125362067s)
	I1108 08:30:32.316425   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.116682475s)
	I1108 08:30:32.316443   10436 addons.go:480] Verifying addon registry=true in "addons-982714"
	I1108 08:30:32.316480   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.032983507s)
	I1108 08:30:32.316589   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.159941474s)
	I1108 08:30:32.316553   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.735006858s)
	I1108 08:30:32.316611   10436 addons.go:480] Verifying addon metrics-server=true in "addons-982714"
	I1108 08:30:32.316623   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.802510122s)
	I1108 08:30:32.316643   10436 addons.go:480] Verifying addon ingress=true in "addons-982714"
	I1108 08:30:32.318126   10436 out.go:179] * Verifying registry addon...
	I1108 08:30:32.318125   10436 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-982714 service yakd-dashboard -n yakd-dashboard
	
	I1108 08:30:32.318902   10436 out.go:179] * Verifying ingress addon...
	I1108 08:30:32.320145   10436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1108 08:30:32.321205   10436 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 08:30:32.416674   10436 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 08:30:32.416696   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:32.416859   10436 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 08:30:32.416876   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:32.471548   10436 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
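(The 'default-storageclass' error above is an optimistic-concurrency failure, HTTP 409 Conflict: the local-path StorageClass was modified by another writer between minikube's read and its update. The standard client-go remedy is to re-read and reapply the mutation under retry.RetryOnConflict; an illustrative sketch, assuming a clientset cs, with the helper name mine:)

    package storageclass

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading and retrying whenever the update hits a 409 Conflict.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err
    	})
    }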
	I1108 08:30:32.903485   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:32.903959   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.124779   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.228703413s)
	W1108 08:30:33.124834   10436 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 08:30:33.124795   10436 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.87657234s)
	I1108 08:30:33.124883   10436 retry.go:31] will retry after 197.197452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
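(This apply failure is the usual CRD race: the batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass CR in one shot, and the CR is rejected because the freshly created CRD isn't Established yet, hence "ensure CRDs are installed first". The 197ms retry above papers over it; an alternative is to block on the CRD's Established condition before applying CRs. An illustrative sketch against the apiextensions clientset:)

    package crdwait

    import (
    	"context"
    	"time"

    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitEstablished blocks until the named CRD reports Established=True,
    // after which custom resources of that kind can be applied safely.
    func waitEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // not visible yet; keep polling
    			}
    			for _, cond := range crd.Status.Conditions {
    				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }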
	I1108 08:30:33.124909   10436 api_server.go:72] duration metric: took 10.280962166s to wait for apiserver process to appear ...
	I1108 08:30:33.124921   10436 api_server.go:88] waiting for apiserver healthz status ...
	I1108 08:30:33.124994   10436 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8443/healthz ...
	I1108 08:30:33.129877   10436 api_server.go:279] https://192.168.39.224:8443/healthz returned 200:
	ok
	I1108 08:30:33.131689   10436 api_server.go:141] control plane version: v1.34.1
	I1108 08:30:33.131717   10436 api_server.go:131] duration metric: took 6.737881ms to wait for apiserver health ...
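(The healthz wait above is just an HTTPS GET against the apiserver: a 200 response with body "ok" counts as healthy. A minimal probe sketch; illustrative only, and TLS verification is skipped here for brevity where real code should trust the cluster CA:)

    package health

    import (
    	"crypto/tls"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz mirrors the probe in the log above. InsecureSkipVerify is
    // for sketch brevity only; production code should pin the cluster CA.
    func checkHealthz(url string) (bool, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", err
    }

(Called here as checkHealthz("https://192.168.39.224:8443/healthz"), matching the endpoint in the log.)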
	I1108 08:30:33.131729   10436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 08:30:33.144427   10436 system_pods.go:59] 15 kube-system pods found
	I1108 08:30:33.144461   10436 system_pods.go:61] "amd-gpu-device-plugin-9n6dq" [c3d6c069-5553-4b98-930e-eb7af77262c4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 08:30:33.144468   10436 system_pods.go:61] "coredns-66bc5c9577-7xmwf" [b805265c-a8d4-442a-b686-e089fd2dc935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:33.144475   10436 system_pods.go:61] "coredns-66bc5c9577-cd6rj" [989ed4e4-831c-4bd8-a5bb-7693cdeda506] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:33.144480   10436 system_pods.go:61] "etcd-addons-982714" [fe96e2b2-324a-404f-b1e6-c163f28de2f9] Running
	I1108 08:30:33.144484   10436 system_pods.go:61] "kube-apiserver-addons-982714" [b794e20b-f233-473c-8a4e-fc952fb8c0d9] Running
	I1108 08:30:33.144487   10436 system_pods.go:61] "kube-controller-manager-addons-982714" [b020c91f-1bc3-4dda-b16d-f64ea858c4bf] Running
	I1108 08:30:33.144492   10436 system_pods.go:61] "kube-ingress-dns-minikube" [d038ca99-376e-42b5-a2ec-2448f19fa561] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:33.144507   10436 system_pods.go:61] "kube-proxy-66s8n" [ac2d29e6-b58b-4ea7-b13c-4cd15436141b] Running
	I1108 08:30:33.144510   10436 system_pods.go:61] "kube-scheduler-addons-982714" [79da32cf-1636-4ba3-8f46-6148323c49ee] Running
	I1108 08:30:33.144515   10436 system_pods.go:61] "metrics-server-85b7d694d7-dsrgz" [60d165fd-f7eb-4b84-9108-14949a5300e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:33.144519   10436 system_pods.go:61] "nvidia-device-plugin-daemonset-9nlkp" [34beaf17-15b2-4f57-ad8f-fed0eb6775be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:33.144524   10436 system_pods.go:61] "registry-6b586f9694-drc2f" [aaf266db-6e91-4084-a296-d03377708fa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:33.144531   10436 system_pods.go:61] "registry-creds-764b6fb674-p5gh5" [a5d0fc99-54ab-4fbd-8e84-2a72617efc94] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:33.144536   10436 system_pods.go:61] "registry-proxy-f4pgf" [3938b218-293a-4387-b044-708a50497e10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 08:30:33.144541   10436 system_pods.go:61] "storage-provisioner" [17301402-30e8-42d9-8282-6cd102162642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:33.144547   10436 system_pods.go:74] duration metric: took 12.811994ms to wait for pod list to return data ...
	I1108 08:30:33.144558   10436 default_sa.go:34] waiting for default service account to be created ...
	I1108 08:30:33.190984   10436 default_sa.go:45] found service account: "default"
	I1108 08:30:33.191010   10436 default_sa.go:55] duration metric: took 46.446544ms for default service account to be created ...
	I1108 08:30:33.191019   10436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 08:30:33.237159   10436 system_pods.go:86] 17 kube-system pods found
	I1108 08:30:33.237191   10436 system_pods.go:89] "amd-gpu-device-plugin-9n6dq" [c3d6c069-5553-4b98-930e-eb7af77262c4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 08:30:33.237199   10436 system_pods.go:89] "coredns-66bc5c9577-7xmwf" [b805265c-a8d4-442a-b686-e089fd2dc935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:33.237207   10436 system_pods.go:89] "coredns-66bc5c9577-cd6rj" [989ed4e4-831c-4bd8-a5bb-7693cdeda506] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:33.237211   10436 system_pods.go:89] "etcd-addons-982714" [fe96e2b2-324a-404f-b1e6-c163f28de2f9] Running
	I1108 08:30:33.237215   10436 system_pods.go:89] "kube-apiserver-addons-982714" [b794e20b-f233-473c-8a4e-fc952fb8c0d9] Running
	I1108 08:30:33.237219   10436 system_pods.go:89] "kube-controller-manager-addons-982714" [b020c91f-1bc3-4dda-b16d-f64ea858c4bf] Running
	I1108 08:30:33.237227   10436 system_pods.go:89] "kube-ingress-dns-minikube" [d038ca99-376e-42b5-a2ec-2448f19fa561] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:33.237230   10436 system_pods.go:89] "kube-proxy-66s8n" [ac2d29e6-b58b-4ea7-b13c-4cd15436141b] Running
	I1108 08:30:33.237234   10436 system_pods.go:89] "kube-scheduler-addons-982714" [79da32cf-1636-4ba3-8f46-6148323c49ee] Running
	I1108 08:30:33.237238   10436 system_pods.go:89] "metrics-server-85b7d694d7-dsrgz" [60d165fd-f7eb-4b84-9108-14949a5300e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:33.237243   10436 system_pods.go:89] "nvidia-device-plugin-daemonset-9nlkp" [34beaf17-15b2-4f57-ad8f-fed0eb6775be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:33.237250   10436 system_pods.go:89] "registry-6b586f9694-drc2f" [aaf266db-6e91-4084-a296-d03377708fa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:33.237256   10436 system_pods.go:89] "registry-creds-764b6fb674-p5gh5" [a5d0fc99-54ab-4fbd-8e84-2a72617efc94] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:33.237260   10436 system_pods.go:89] "registry-proxy-f4pgf" [3938b218-293a-4387-b044-708a50497e10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 08:30:33.237263   10436 system_pods.go:89] "snapshot-controller-7d9fbc56b8-drlc6" [fea40d4a-d7f7-47e0-a297-534ca7882ca3] Pending
	I1108 08:30:33.237267   10436 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glfpp" [8818fa41-c528-4345-ac74-c417ead6d3ab] Pending
	I1108 08:30:33.237271   10436 system_pods.go:89] "storage-provisioner" [17301402-30e8-42d9-8282-6cd102162642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:33.237282   10436 system_pods.go:126] duration metric: took 46.25793ms to wait for k8s-apps to be running ...
	I1108 08:30:33.237289   10436 system_svc.go:44] waiting for kubelet service to be running ...
	I1108 08:30:33.237330   10436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
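The system_svc.go check above boils down to `systemctl is-active --quiet kubelet`, which exits 0 only while the unit is active. A rough local equivalent in Go (an assumption for illustration; minikube runs the command over SSH inside the guest VM):

// kubelet_check.go — treat the exit status of `systemctl is-active --quiet`
// as the readiness signal, the way the ssh_runner call above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		// Any non-zero exit means the unit is not active.
		fmt.Println("kubelet service is not running:", err)
		return
	}
	fmt.Println("kubelet service is running")
}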
	I1108 08:30:33.322242   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:30:33.334770   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:33.334945   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.868147   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.873277   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:33.994747   10436 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.329214556s)
	I1108 08:30:33.994792   10436 system_svc.go:56] duration metric: took 757.493507ms WaitForService to wait for kubelet
	I1108 08:30:33.994816   10436 kubeadm.go:587] duration metric: took 11.150868822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 08:30:33.994839   10436 node_conditions.go:102] verifying NodePressure condition ...
	I1108 08:30:33.996132   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.64694327s)
	I1108 08:30:33.996158   10436 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-982714"
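Each addon install logged above is an ordinary `kubectl apply -f ...` run on the node with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A local stand-in for one such invocation (an assumption; the real calls go through ssh_runner and the pinned kubectl under /var/lib/minikube/binaries):

// addon_apply.go — apply a single addon manifest with an explicit
// KUBECONFIG, mirroring the ssh_runner kubectl invocations in this log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-storageclass.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("apply failed:", err)
	}
}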
	I1108 08:30:33.996333   10436 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:30:33.997528   10436 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 08:30:33.998506   10436 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 08:30:33.999075   10436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 08:30:33.999736   10436 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 08:30:33.999752   10436 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 08:30:34.022234   10436 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 08:30:34.022267   10436 node_conditions.go:123] node cpu capacity is 2
	I1108 08:30:34.022279   10436 node_conditions.go:105] duration metric: took 27.358538ms to run NodePressure ...
	I1108 08:30:34.022290   10436 start.go:242] waiting for startup goroutines ...
	I1108 08:30:34.039976   10436 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 08:30:34.039994   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
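The long runs of kapi.go:96 "waiting for pod ..., current state: Pending" lines that fill the rest of this log come from a poll loop: list pods by label selector roughly every half second until every match reports phase Running. A minimal client-go sketch of that pattern (an assumption, not minikube's kapi.go; the namespace, selector, and timeout are taken from the log only for illustration):

// label_wait.go — poll pods matching a label selector until all are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// This is the line the log repeats while pods start up.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not running within %s", selector, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}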
	I1108 08:30:34.122087   10436 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 08:30:34.122118   10436 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 08:30:34.255545   10436 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 08:30:34.255573   10436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 08:30:34.323099   10436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 08:30:34.329064   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:34.329831   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.506896   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:34.831237   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:34.831424   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.010973   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:35.330559   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.332162   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:35.513297   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:35.636792   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.314497177s)
	I1108 08:30:35.773215   10436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.450068917s)
	I1108 08:30:35.774484   10436 addons.go:480] Verifying addon gcp-auth=true in "addons-982714"
	I1108 08:30:35.776853   10436 out.go:179] * Verifying gcp-auth addon...
	I1108 08:30:35.779011   10436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 08:30:35.811311   10436 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 08:30:35.811332   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:35.859417   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.861603   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.008355   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:36.285865   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:36.330587   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:36.334535   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.506415   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:36.790900   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:36.896931   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:36.897072   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.002947   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:37.282787   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:37.323704   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:37.325146   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.504445   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:37.782390   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:37.826485   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.826560   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.003122   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:38.283613   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:38.324028   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.326101   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:38.502588   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:38.785574   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:38.823471   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.825180   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:39.003441   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:39.285917   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:39.324388   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.327362   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:39.509268   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:39.784407   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:39.830531   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:39.830620   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.007124   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:40.281866   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:40.328339   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:40.329860   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.507155   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:40.784803   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:40.829452   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.832785   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.006572   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:41.283422   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:41.325412   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:41.326136   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.502857   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:41.784616   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:41.828045   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:41.829191   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:42.002743   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:42.285484   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:42.325392   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:42.325436   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.506641   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:42.834215   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:42.835930   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.836030   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:43.006293   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:43.282949   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:43.323947   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:43.325332   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:43.504833   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:43.783337   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:43.823042   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:43.825396   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:44.003073   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:44.282676   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:44.325214   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:44.325600   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:44.504071   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:44.782459   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:44.823485   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:44.824792   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.003949   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:45.282034   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:45.324356   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:45.324482   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.505160   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:45.785967   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:45.826071   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:45.828539   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:46.002782   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:46.287449   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:46.324894   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:46.325988   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:46.503378   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:46.782936   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:46.824074   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:46.825333   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:47.005977   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:47.282186   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:47.324292   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:47.325458   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:47.503804   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:47.783843   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:47.823584   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:47.828966   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:48.097254   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:48.286217   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:48.327075   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:48.332856   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:48.505409   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:48.782778   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:48.824402   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:48.825448   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:49.003305   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:49.283899   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:49.326356   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:49.327363   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:49.503566   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:49.784886   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:49.827734   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:49.830917   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:50.007355   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:50.283948   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:50.324581   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:50.325401   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:50.503013   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:50.782867   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:50.823863   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:50.825154   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:51.002738   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:51.284200   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:51.323227   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:51.324174   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:51.502818   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:51.783908   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:51.824163   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:51.824835   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:52.004876   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:52.282734   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:52.323747   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:52.325201   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:52.503185   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:52.782193   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:52.825455   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:52.825618   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:53.005306   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:53.281839   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:53.323899   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:53.327292   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:53.505523   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:53.782979   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:53.826164   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:53.832160   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:54.005419   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:54.284084   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:54.328533   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:54.331412   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:54.503739   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:54.782623   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:54.832436   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:54.836802   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:55.174466   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:55.287065   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:55.327399   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:55.328719   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:55.503633   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:55.784927   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:55.826082   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:55.830095   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:56.004109   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:56.282389   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:56.323076   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:56.327995   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:56.506229   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:56.782908   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:56.825947   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:56.826736   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:57.003235   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:57.287084   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:57.324765   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:57.324900   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:57.503173   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:57.782232   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:57.830344   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:57.830655   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:58.003326   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:58.283066   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:58.327225   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:58.329474   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:58.506303   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:58.790256   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:58.890246   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:58.892245   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:59.005710   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:59.287701   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:59.327241   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:59.327388   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:59.510294   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:59.784053   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:59.830284   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:59.830396   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:00.006414   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:00.283466   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:00.325834   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:00.327208   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:00.503657   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:00.784032   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:00.825194   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:00.826930   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:01.004168   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:01.284092   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:01.328611   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:01.328902   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:01.505193   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:01.782454   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:01.824300   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:01.825639   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:02.005081   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:02.282236   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:02.324203   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:02.325839   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:02.503820   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:02.783394   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:02.824641   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:02.825271   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:03.003755   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:03.283819   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:03.330578   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:03.330641   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:03.504593   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:03.785570   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:03.826982   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:03.830529   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:04.167101   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:04.282990   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:04.325294   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:04.328025   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:04.506039   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:04.783356   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:04.826153   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:04.826247   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:05.003761   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:05.283486   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:05.324395   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:05.325216   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:05.502794   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:05.783783   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:05.824619   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:05.825118   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:06.003276   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:06.282414   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:06.324527   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:06.326033   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:06.502858   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:06.783492   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:06.884993   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:06.885126   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:07.002621   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:07.284053   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:07.325649   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:07.328047   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:07.507761   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:07.784149   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:07.824830   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:07.824849   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:08.006508   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:08.283393   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:08.326394   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:08.327873   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:08.503551   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:08.786458   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:08.824922   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:08.825445   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:09.003186   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:09.329789   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:09.329923   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:09.330487   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:09.504825   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:09.782306   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:09.825247   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:09.825279   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:10.003272   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:10.282622   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:10.325347   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:31:10.327890   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:10.503477   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:10.787410   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:10.826524   10436 kapi.go:107] duration metric: took 38.506352264s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 08:31:10.827336   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:11.004461   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:11.285378   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:11.324998   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:11.504047   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:11.783649   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:11.824648   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:12.003413   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:12.283140   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:12.325932   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:12.503137   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:12.782812   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:12.824961   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:13.002872   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:13.286866   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:13.325454   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:13.503648   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:13.810546   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:13.909085   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:14.003677   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:14.284090   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:14.327678   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:14.506143   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:14.783768   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:14.826907   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:15.006402   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:15.282927   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:15.325728   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:15.531737   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:15.843548   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:15.843741   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:16.003661   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:16.283724   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:16.325411   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:16.507262   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:16.786439   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:16.828907   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:17.006537   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:17.284477   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:17.325094   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:17.504951   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:17.784372   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:17.824917   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:18.004747   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:18.283843   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:18.326225   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:18.502677   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:18.789836   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:18.825524   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:19.003869   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:19.282377   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:19.325287   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:19.505757   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:19.785290   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:19.824937   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:20.002971   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:20.285110   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:20.326839   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:20.505417   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:20.782789   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:20.827697   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:21.005168   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:21.287116   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:21.327782   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:21.506155   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:21.891096   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:21.897192   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:22.002490   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:22.283096   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:22.325358   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:22.508174   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:22.782130   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:22.826050   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:23.004648   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:23.284238   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:23.326167   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:23.505002   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:23.784059   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:23.825753   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:24.004179   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:24.283003   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:24.325562   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:24.502678   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:24.783171   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:24.825299   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:25.003597   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:25.286182   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:25.327559   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:25.502820   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:25.786379   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:25.825481   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:26.002813   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:26.283299   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:26.324015   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:26.511681   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:26.787624   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:26.825677   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:27.005360   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:27.285206   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:27.325644   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:27.504385   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:27.783223   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:27.825653   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:28.041610   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:28.284235   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:28.325627   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:28.507950   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:28.783440   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:28.824553   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:29.007452   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:29.282599   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:29.325035   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:29.502901   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:29.785369   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:29.848578   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:30.002704   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:30.287443   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:30.390252   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:30.503961   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:30.782749   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:30.825516   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:31.021386   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:31.283600   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:31.326150   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:31.504354   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:31.782331   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:31.824231   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:32.003639   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:32.288107   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:32.325159   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:32.504385   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:32.783946   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:32.825457   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:33.004107   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:33.284610   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:33.327960   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:33.503418   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:33.783176   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:33.825617   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:34.008140   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:34.282890   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:34.327876   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:34.503125   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:34.782121   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:34.827651   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:35.003595   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:35.329095   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:35.329348   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:35.504159   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:35.790038   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:35.888400   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:36.004044   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:36.283771   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:36.327149   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:36.506238   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:36.783976   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:36.825788   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:37.014404   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:37.284809   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:37.385864   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:37.502923   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:37.786406   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:37.825441   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:38.006582   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:38.285716   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:38.327585   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:38.506885   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:38.784594   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:38.826566   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:39.005014   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:39.282555   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:39.327826   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:39.716098   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:39.829591   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:39.831223   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:40.004214   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:40.284376   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:40.324472   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:40.503571   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:40.783803   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:40.828788   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:41.003289   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:41.283527   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:41.326189   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:41.505765   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:41.786562   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:41.825110   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:42.005913   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:42.486131   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:42.488580   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:42.503579   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:42.784187   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:42.825908   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:43.005799   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:43.284255   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:43.328654   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:43.503361   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:43.782393   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:43.824170   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:44.003337   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:44.285782   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:44.328973   10436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:31:44.504589   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:44.783613   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:44.828668   10436 kapi.go:107] duration metric: took 1m12.507469653s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 08:31:45.004358   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:45.285367   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:45.503197   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:45.783254   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:46.059375   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:46.282816   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:46.503919   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:46.796941   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:47.004072   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:47.282345   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:47.504334   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:47.783116   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:48.002975   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:48.283368   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:48.504536   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:48.784036   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:49.004007   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:49.282080   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:49.504052   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:49.783294   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:50.003972   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:50.282572   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:50.503910   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:50.784474   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:51.003944   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:51.284084   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:51.506633   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:51.786032   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:52.004794   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:52.284423   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:31:52.506219   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:52.782917   10436 kapi.go:107] duration metric: took 1m17.003901135s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 08:31:52.784976   10436 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-982714 cluster.
	I1108 08:31:52.786073   10436 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 08:31:52.787335   10436 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
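	[editor's note] The three gcp-auth notices above are user-facing guidance: once the addon is enabled, every newly created pod gets the credentials mounted unless it opts out via a label with the `gcp-auth-skip-secret` key. As a rough illustration only (not taken from minikube's source; the pod name, image, and label value are assumptions, since the webhook keys off the label's presence), an opted-out pod created with client-go might look like this:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Load the kubeconfig written by "minikube start" (default path assumed).
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }

	        pod := &corev1.Pod{
	            ObjectMeta: metav1.ObjectMeta{
	                Name: "no-gcp-creds", // hypothetical name
	                Labels: map[string]string{
	                    // The presence of this label tells the gcp-auth webhook
	                    // to skip this pod and leave the credentials unmounted.
	                    "gcp-auth-skip-secret": "true",
	                },
	            },
	            Spec: corev1.PodSpec{
	                Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
	            },
	        }
	        created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("created pod:", created.Name)
	    }

	The same opt-out in a YAML manifest is simply this label under metadata.labels.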
	I1108 08:31:53.007223   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:53.504223   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:54.005527   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:54.503250   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:55.004087   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:55.503443   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:56.004241   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:56.653869   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:57.004558   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:57.502810   10436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:31:58.003821   10436 kapi.go:107] duration metric: took 1m24.004740497s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 08:31:58.005903   10436 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, amd-gpu-device-plugin, registry-creds, inspektor-gadget, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1108 08:31:58.007190   10436 addons.go:515] duration metric: took 1m35.163245362s for enable addons: enabled=[cloud-spanner ingress-dns amd-gpu-device-plugin registry-creds inspektor-gadget nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
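	[editor's note] The long runs of kapi.go:96 lines earlier in this log come from a label-selector poll: each addon's pods are listed by selector roughly every 500 ms (the cadence is visible in the timestamps) until every match reports Running, at which point the kapi.go:107 duration line is emitted. A minimal client-go sketch of that pattern follows; the helper name, interval, and timeout are assumptions, not minikube's actual code:

	    package podwait

	    import (
	        "context"
	        "log"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // WaitForPodsByLabel polls the pods matching selector in ns until all
	    // of them are Running, logging the current state on each round, similar
	    // in spirit to the kapi.go loop above.
	    func WaitForPodsByLabel(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	                if err != nil || len(pods.Items) == 0 {
	                    // Transient errors and empty lists both mean "not yet"; keep polling.
	                    log.Printf("waiting for pod %q, current state: Pending", selector)
	                    return false, nil
	                }
	                for _, p := range pods.Items {
	                    if p.Status.Phase != corev1.PodRunning {
	                        log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
	                        return false, nil
	                    }
	                }
	                return true, nil
	            })
	    }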
	I1108 08:31:58.007230   10436 start.go:247] waiting for cluster config update ...
	I1108 08:31:58.007247   10436 start.go:256] writing updated cluster config ...
	I1108 08:31:58.007488   10436 ssh_runner.go:195] Run: rm -f paused
	I1108 08:31:58.014525   10436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 08:31:58.019597   10436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7xmwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.024578   10436 pod_ready.go:94] pod "coredns-66bc5c9577-7xmwf" is "Ready"
	I1108 08:31:58.024605   10436 pod_ready.go:86] duration metric: took 4.986691ms for pod "coredns-66bc5c9577-7xmwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.027333   10436 pod_ready.go:83] waiting for pod "etcd-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.032485   10436 pod_ready.go:94] pod "etcd-addons-982714" is "Ready"
	I1108 08:31:58.032519   10436 pod_ready.go:86] duration metric: took 5.1693ms for pod "etcd-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.034642   10436 pod_ready.go:83] waiting for pod "kube-apiserver-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.039791   10436 pod_ready.go:94] pod "kube-apiserver-addons-982714" is "Ready"
	I1108 08:31:58.039808   10436 pod_ready.go:86] duration metric: took 5.148171ms for pod "kube-apiserver-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.041620   10436 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.419272   10436 pod_ready.go:94] pod "kube-controller-manager-addons-982714" is "Ready"
	I1108 08:31:58.419297   10436 pod_ready.go:86] duration metric: took 377.659678ms for pod "kube-controller-manager-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:58.620250   10436 pod_ready.go:83] waiting for pod "kube-proxy-66s8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:59.020359   10436 pod_ready.go:94] pod "kube-proxy-66s8n" is "Ready"
	I1108 08:31:59.020394   10436 pod_ready.go:86] duration metric: took 400.11418ms for pod "kube-proxy-66s8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:59.220526   10436 pod_ready.go:83] waiting for pod "kube-scheduler-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:59.620320   10436 pod_ready.go:94] pod "kube-scheduler-addons-982714" is "Ready"
	I1108 08:31:59.620350   10436 pod_ready.go:86] duration metric: took 399.800293ms for pod "kube-scheduler-addons-982714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:59.620361   10436 pod_ready.go:40] duration metric: took 1.605812117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
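	[editor's note] The pod_ready.go lines above apply a stricter check than pod phase: a pod counts as "Ready" only when its PodReady condition reports True. A small sketch of that standard Kubernetes predicate (the helper name is illustrative, not minikube's actual code):

	    package podready

	    import corev1 "k8s.io/api/core/v1"

	    // IsPodReady reports whether the pod's PodReady condition is True,
	    // which is the standard meaning of "Ready" being waited for above.
	    func IsPodReady(p *corev1.Pod) bool {
	        for _, c := range p.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }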
	I1108 08:31:59.663298   10436 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 08:31:59.665048   10436 out.go:179] * Done! kubectl is now configured to use "addons-982714" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.680078833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762590905680046903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81cc6c3a-6963-423f-9cca-5b93b277da3c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.683445952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41423648-683c-4f79-8b19-f55b3164f277 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.683559297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41423648-683c-4f79-8b19-f55b3164f277 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.683942986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bca691b19dbb9672cba234e1e0f2fd02c2f5e0c507fa6bff325977e96f9afe7f,PodSandboxId:c141ad74a307a886a7ab8ca00ed2a38138cee63c1c059cb3a9cbfd3679c7c812,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1762590763883689115,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f5c0b83-bcbf-47ea-aed4-d32a51f5b988,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a8f4aba84eb6c3874b269bb8cbc96f9a1aa27124edebf8d04ca2a91d391e14,PodSandboxId:0e6977053a990e15e1e8195d1e22128f2cfdf9284e239f19548099e0a1e06b86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762590724656919463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37c39924-9493-4bce-a21a-b32309dde4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6b5e4b45c5cc8e3514a0abc153564eb38acdd627689fe18f0ae9ff2b43f62e,PodSandboxId:cfe06d851efe20245e0d16de5d2b1b89eb00a52f371cf13c24445a578f663a27,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762590704285689884,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-gwgxb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 417b874a-1fb4-438e-9749-07b90444023c,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6ca88fc91f080ee1b4175002fdcb49f970b63d52edccbb6039da8c76a7f822fd,PodSandboxId:bd193cf4b834dd0faec56158943ce00670258c501f282f4c2d4be88374d2c515,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1762590704165129151,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vkw9k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1bffca3a-f214-4c35-b6ee-2e7ae4927e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36b6b8e6e7a37abd85bf150bdcbd36fa418249979c2db4f52870ba968943cf3f,PodSandboxId:33ad4ec8859409faeb30d9d2c21553a4aa09bb79149f932fc4c16ab1b0e9a5a6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762590692268515991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-szcsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81d082be-c5d5-414a-a719-e1897d981bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d024ab77fc4d2bb6315dfca863b52759c71d07ad4d4ac8bca524df4c5cd3665a,PodSandboxId:e1f70622fcff3a211d5fd5a57e3bd8730e7f37bd8990ebc769367f0d5fee042c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762590660653004704,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d038ca99-376e-42b5-a2ec-2448f19fa561,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e06f9a42bf5953501faac69d3d07f8bcbb87c47b7b3181b8380d779de9b22f,PodSandboxId:2986d6189769ed5845d12fd9d1ce9c856541e6d944397944f0175d4fee6b5305,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762590635075350562,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9n6dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d6c069-5553-4b98-930e-eb7af77262c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa24655f68058c52f5f786b5b8b5a165b739d74c9c9b9287255cfa9edd23528,PodSandboxId:225acbb7c82603de915ec6055263733963583f2aa227f881e9adce1d0c4a8852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762590633529904913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17301402-30e8-42d9-8282-6cd102162642,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b8451427da03df0621e465a9c0183c67b85b58b9d18542a341a015640c1bb6,PodSandboxId:8b6b017013eade258d897dd35b166abee7b6adef14ace9421328337fae3f8292,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762590625216631846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7xmwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b805265c-a8d4-442a-b686-e089fd2dc935,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1793c995538db844562758b54777896733bb735cf41997c6aa1e9bced26f79a,PodSandboxId:b3686efa28b96fe2d4c37149cd83460441b802c0eb2b7e77f3bcf6cd7358f52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762590624550494681,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66s8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d29e6-b58b-4ea7-b13c-4cd15436141b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160870965acef5fac4a56117e9badd30e3fe594dfe20290d5264966301884d05,PodSandboxId:937132ea964268283197c6e345a935fea3faea57d6e4d760f7a57007dbd04d38,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762590611684463347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-982714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4f8216f9a9dcbe9e0e4d8c056b4e038,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fec1ef4b2121724cf5c6f1a9fe95bfddb3f1832c04930cabf4d817b047d3f3,PodSandboxId:45474777e788fd6883b5602f529cb1be25680d19b239c88d07d39e7797dff9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762590611717528752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-982714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be19292ef154facd2f1441d3e822a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"contai
nerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca7ac470524d0ba7a5f9a8b6d7d21648fda68206fc112cb7e8d8bd0be1757b5,PodSandboxId:76a567e96e4b9eab750359975a0bd5c28446f971b4fa281e340cca598a473431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762590611668226434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-982714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006cb5a2e935980b8345043649fa96f6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212abbfc8a025e4c4d3e3c5a80a44a8396f2b47e6380099c6f940ca523bdd444,PodSandboxId:666e248d3e5bb709a73110e8b33c1093dd76afb92988dc8dc767de2f0c799e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762590611632361622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-982714,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: e1143f8d2ea63ea14a28cf63e72da85f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41423648-683c-4f79-8b19-f55b3164f277 name=/runtime.v1.RuntimeService/ListContainers
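	[editor's note] The CRI-O entries in this section are the server side of CRI gRPC traffic: the kubelet (and tools such as crictl) send Version, ImageFsInfo, and ListContainers requests over the runtime socket, and CRI-O's debug log records each request/response pair, including the unfiltered container list above. A sketch of issuing the same ListContainers RPC directly follows; the socket path and library versions are assumptions for a typical CRI-O host:

	    package main

	    import (
	        "context"
	        "fmt"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // CRI-O's default CRI endpoint; adjust if the host differs.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        // An empty filter returns the full container list, matching the
	        // "No filters were applied" debug lines in this log.
	        resp, err := client.ListContainers(context.TODO(),
	            &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	        if err != nil {
	            panic(err)
	        }
	        for _, c := range resp.Containers {
	            // Container IDs are 64 hex chars; print a short prefix like crictl does.
	            fmt.Println(c.Id[:12], c.Metadata.Name, c.State)
	        }
	    }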
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.728756498Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c653c48-529d-4e67-a0f8-f3877f6becf0 name=/runtime.v1.RuntimeService/Version
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.729142831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c653c48-529d-4e67-a0f8-f3877f6becf0 name=/runtime.v1.RuntimeService/Version
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.730969367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a0f4a9c-7569-4013-8579-185cc7360ada name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.732305907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762590905732271805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a0f4a9c-7569-4013-8579-185cc7360ada name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.733017229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78ec69b8-0c8a-420a-9ea3-8bba36d293eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.733097316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78ec69b8-0c8a-420a-9ea3-8bba36d293eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 08:35:05 addons-982714 crio[812]: time="2025-11-08 08:35:05.733677398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bca691b19dbb9672cba234e1e0f2fd02c2f5e0c507fa6bff325977e96f9afe7f,PodSandboxId:c141ad74a307a886a7ab8ca00ed2a38138cee63c1c059cb3a9cbfd3679c7c812,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1762590763883689115,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f5c0b83-bcbf-47ea-aed4-d32a51f5b988,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a8f4aba84eb6c3874b269bb8cbc96f9a1aa27124edebf8d04ca2a91d391e14,PodSandboxId:0e6977053a990e15e1e8195d1e22128f2cfdf9284e239f19548099e0a1e06b86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762590724656919463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37c39924-9493-4bce-a21a-b32309dde4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6b5e4b45c5cc8e3514a0abc153564eb38acdd627689fe18f0ae9ff2b43f62e,PodSandboxId:cfe06d851efe20245e0d16de5d2b1b89eb00a52f371cf13c24445a578f663a27,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762590704285689884,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-gwgxb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 417b874a-1fb4-438e-9749-07b90444023c,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6ca88fc91f080ee1b4175002fdcb49f970b63d52edccbb6039da8c76a7f822fd,PodSandboxId:bd193cf4b834dd0faec56158943ce00670258c501f282f4c2d4be88374d2c515,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1762590704165129151,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vkw9k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1bffca3a-f214-4c35-b6ee-2e7ae4927e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36b6b8e6e7a37abd85bf150bdcbd36fa418249979c2db4f52870ba968943cf3f,PodSandboxId:33ad4ec8859409faeb30d9d2c21553a4aa09bb79149f932fc4c16ab1b0e9a5a6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762590692268515991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-szcsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81d082be-c5d5-414a-a719-e1897d981bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d024ab77fc4d2bb6315dfca863b52759c71d07ad4d4ac8bca524df4c5cd3665a,PodSandboxId:e1f70622fcff3a211d5fd5a57e3bd8730e7f37bd8990ebc769367f0d5fee042c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762590660653004704,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d038ca99-376e-42b5-a2ec-2448f19fa561,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e06f9a42bf5953501faac69d3d07f8bcbb87c47b7b3181b8380d779de9b22f,PodSandboxId:2986d6189769ed5845d12fd9d1ce9c856541e6d944397944f0175d4fee6b5305,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762590635075350562,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9n6dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d6c069-5553-4b98-930e-eb7af77262c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa24655f68058c52f5f786b5b8b5a165b739d74c9c9b9287255cfa9edd23528,PodSandboxId:225acbb7c82603de915ec6055263733963583f2aa227f881e9adce1d0c4a8852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762590633529904913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17301402-30e8-42d9-8282-6cd102162642,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b8451427da03df0621e465a9c0183c67b85b58b9d18542a341a015640c1bb6,PodSandboxId:8b6b017013eade258d897dd35b166abee7b6adef14ace9421328337fae3f8292,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762590625216631846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7xmwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b805265c-a8d4-442a-b686-e089fd2dc935,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1793c995538db844562758b54777896733bb735cf41997c6aa1e9bced26f79a,PodSandboxId:b3686efa28b96fe2d4c37149cd83460441b802c0eb2b7e77f3bcf6cd7358f52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762590624550494681,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66s8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d29e6-b58b-4ea7-b13c-4cd15436141b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160870965acef5fac4a56117e9badd30e3fe594dfe20290d5264966301884d05,PodSandboxId:937132ea964268283197c6e345a935fea3faea57d6e4d760f7a57007dbd04d38,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762590611684463347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-982714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4f8216f9a9dcbe9e0e4d8c056b4e038,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fec1ef4b2121724cf5c6f1a9fe95bfddb3f1832c04930cabf4d817b047d3f3,PodSandboxId:45474777e788fd6883b5602f529cb1be25680d19b239c88d07d39e7797dff9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762590611717528752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-982714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be19292ef154facd2f1441d3e822a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"contai
nerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca7ac470524d0ba7a5f9a8b6d7d21648fda68206fc112cb7e8d8bd0be1757b5,PodSandboxId:76a567e96e4b9eab750359975a0bd5c28446f971b4fa281e340cca598a473431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762590611668226434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-982714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006cb5a2e935980b8345043649fa96f6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212abbfc8a025e4c4d3e3c5a80a44a8396f2b47e6380099c6f940ca523bdd444,PodSandboxId:666e248d3e5bb709a73110e8b33c1093dd76afb92988dc8dc767de2f0c799e6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762590611632361622,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-982714,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: e1143f8d2ea63ea14a28cf63e72da85f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78ec69b8-0c8a-420a-9ea3-8bba36d293eb name=/runtime.v1.RuntimeService/ListContainers
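The Version, ImageFsInfo, and ListContainers exchange above is the kubelet's routine CRI polling of the cri-o runtime (RuntimeVersion 1.29.1 per the Version response); each poll returns the full container list. As a minimal sketch, the same queries can be issued by hand from inside the node, assuming crictl is present in the minikube VM and cri-o is listening on its default socket (the socket path below is the cri-o default, an assumption rather than something read from this log):

	$ minikube -p addons-982714 ssh
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a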
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bca691b19dbb9       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   c141ad74a307a       nginx
	33a8f4aba84eb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   0e6977053a990       busybox
	5e6b5e4b45c5c       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   cfe06d851efe2       ingress-nginx-controller-675c5ddd98-gwgxb
	6ca88fc91f080       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             3 minutes ago       Exited              patch                     2                   bd193cf4b834d       ingress-nginx-admission-patch-vkw9k
	36b6b8e6e7a37       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              create                    0                   33ad4ec885940       ingress-nginx-admission-create-szcsf
	d024ab77fc4d2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   e1f70622fcff3       kube-ingress-dns-minikube
	41e06f9a42bf5       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   2986d6189769e       amd-gpu-device-plugin-9n6dq
	caa24655f6805       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   225acbb7c8260       storage-provisioner
	67b8451427da0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   8b6b017013ead       coredns-66bc5c9577-7xmwf
	a1793c995538d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   b3686efa28b96       kube-proxy-66s8n
	e7fec1ef4b212       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   45474777e788f       kube-controller-manager-addons-982714
	160870965acef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   937132ea96426       etcd-addons-982714
	dca7ac470524d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   76a567e96e4b9       kube-apiserver-addons-982714
	212abbfc8a025       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   666e248d3e5bb       kube-scheduler-addons-982714
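The table shows the ingress-nginx-admission-patch container exited only on its third try (ATTEMPT 2), while the controller and the test nginx pod are Running. To pull the last output of any container listed here, crictl accepts the truncated ID from the CONTAINER column; a sketch, assuming the same in-VM shell as above:

	$ sudo crictl logs 6ca88fc91f080        # patch job, exited after attempt 2
	$ sudo crictl logs 5e6b5e4b45c5c        # ingress-nginx controller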
	
	
	==> coredns [67b8451427da03df0621e465a9c0183c67b85b58b9d18542a341a015640c1bb6] <==
	[INFO] 10.244.0.8:47774 - 20399 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000142445s
	[INFO] 10.244.0.8:47774 - 39328 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000110325s
	[INFO] 10.244.0.8:47774 - 537 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000332965s
	[INFO] 10.244.0.8:47774 - 40964 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085863s
	[INFO] 10.244.0.8:47774 - 44779 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000298829s
	[INFO] 10.244.0.8:47774 - 24195 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116176s
	[INFO] 10.244.0.8:47774 - 64536 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00032455s
	[INFO] 10.244.0.8:45905 - 10882 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000178334s
	[INFO] 10.244.0.8:45905 - 10624 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000195646s
	[INFO] 10.244.0.8:35381 - 64228 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011616s
	[INFO] 10.244.0.8:35381 - 64480 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067255s
	[INFO] 10.244.0.8:58134 - 24824 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000209537s
	[INFO] 10.244.0.8:58134 - 25051 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109486s
	[INFO] 10.244.0.8:42812 - 47680 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072227s
	[INFO] 10.244.0.8:42812 - 47900 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151817s
	[INFO] 10.244.0.23:59925 - 8273 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000458226s
	[INFO] 10.244.0.23:48414 - 54329 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0007891s
	[INFO] 10.244.0.23:35624 - 15957 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111574s
	[INFO] 10.244.0.23:35248 - 36396 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00049283s
	[INFO] 10.244.0.23:53268 - 5033 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091084s
	[INFO] 10.244.0.23:40774 - 45710 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000497725s
	[INFO] 10.244.0.23:55301 - 58197 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001623961s
	[INFO] 10.244.0.23:52152 - 15121 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00556597s
	[INFO] 10.244.0.27:33024 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000396812s
	[INFO] 10.244.0.27:60771 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000274621s
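The NXDOMAIN-then-NOERROR ladders above are ordinary search-domain expansion, not lookup failures: for a name with fewer than five dots, the pod resolver appends each entry of its search list (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before trying the name as given, so registry.kube-system.svc.cluster.local resolves only on the final, as-written attempt. A typical pod resolv.conf behind this behavior looks like the sketch below; the nameserver address is the conventional kube-dns ClusterIP, assumed rather than taken from this log:

	# illustrative /etc/resolv.conf for a pod in the kube-system namespace
	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10   # assumed kube-dns service IP, not from this log
	options ndots:5         # names with <5 dots walk the search list first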
	
	
	==> describe nodes <==
	Name:               addons-982714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-982714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=addons-982714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T08_30_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-982714
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 08:30:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-982714
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 08:35:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 08:33:22 +0000   Sat, 08 Nov 2025 08:30:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 08:33:22 +0000   Sat, 08 Nov 2025 08:30:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 08:33:22 +0000   Sat, 08 Nov 2025 08:30:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 08:33:22 +0000   Sat, 08 Nov 2025 08:30:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    addons-982714
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa06b1317f1e49db98b524ce9a4473c2
	  System UUID:                fa06b131-7f1e-49db-98b5-24ce9a4473c2
	  Boot ID:                    6fb16c29-793a-479b-a7b4-fc4254443e5f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-world-app-5d498dc89-hgnfk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gwgxb    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m35s
	  kube-system                 amd-gpu-device-plugin-9n6dq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-66bc5c9577-7xmwf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m43s
	  kube-system                 etcd-addons-982714                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m50s
	  kube-system                 kube-apiserver-addons-982714                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-addons-982714        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-66s8n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-982714                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m39s  kube-proxy       
	  Normal  Starting                 4m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s  kubelet          Node addons-982714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s  kubelet          Node addons-982714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s  kubelet          Node addons-982714 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m48s  kubelet          Node addons-982714 status is now: NodeReady
	  Normal  RegisteredNode           4m44s  node-controller  Node addons-982714 event: Registered Node addons-982714 in Controller
	
	
	==> dmesg <==
	[  +8.246768] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 8 08:31] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.700959] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.546498] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.030269] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.316613] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.433080] kauditd_printk_skb: 72 callbacks suppressed
	[  +2.251512] kauditd_printk_skb: 177 callbacks suppressed
	[  +4.648655] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.396842] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.733242] kauditd_printk_skb: 62 callbacks suppressed
	[Nov 8 08:32] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.036068] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.974259] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.956398] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.603049] kauditd_printk_skb: 170 callbacks suppressed
	[  +0.807880] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.050616] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.947433] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.283602] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 8 08:33] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.856218] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 8 08:35] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [160870965acef5fac4a56117e9badd30e3fe594dfe20290d5264966301884d05] <==
	{"level":"warn","ts":"2025-11-08T08:31:39.699473Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.251584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:31:39.699487Z","caller":"traceutil/trace.go:172","msg":"trace[1828293332] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1107; }","duration":"141.267344ms","start":"2025-11-08T08:31:39.558215Z","end":"2025-11-08T08:31:39.699482Z","steps":["trace[1828293332] 'agreement among raft nodes before linearized reading'  (duration: 141.241962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:31:39.699559Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.879608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:31:39.699571Z","caller":"traceutil/trace.go:172","msg":"trace[1393784998] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1107; }","duration":"154.891638ms","start":"2025-11-08T08:31:39.544675Z","end":"2025-11-08T08:31:39.699567Z","steps":["trace[1393784998] 'agreement among raft nodes before linearized reading'  (duration: 154.867082ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:31:42.478620Z","caller":"traceutil/trace.go:172","msg":"trace[136418295] linearizableReadLoop","detail":"{readStateIndex:1144; appliedIndex:1144; }","duration":"201.490009ms","start":"2025-11-08T08:31:42.277114Z","end":"2025-11-08T08:31:42.478604Z","steps":["trace[136418295] 'read index received'  (duration: 201.485033ms)","trace[136418295] 'applied index is now lower than readState.Index'  (duration: 3.984µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:31:42.479018Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.886653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:31:42.479855Z","caller":"traceutil/trace.go:172","msg":"trace[1835522963] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"202.735241ms","start":"2025-11-08T08:31:42.277110Z","end":"2025-11-08T08:31:42.479845Z","steps":["trace[1835522963] 'agreement among raft nodes before linearized reading'  (duration: 201.857856ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:31:42.479194Z","caller":"traceutil/trace.go:172","msg":"trace[996929518] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"303.625356ms","start":"2025-11-08T08:31:42.175561Z","end":"2025-11-08T08:31:42.479186Z","steps":["trace[996929518] 'process raft request'  (duration: 303.300051ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:31:42.480705Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T08:31:42.175541Z","time spent":"305.110218ms","remote":"127.0.0.1:39070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1108 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-08T08:31:42.479353Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.948985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:31:42.480980Z","caller":"traceutil/trace.go:172","msg":"trace[366454829] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"161.620621ms","start":"2025-11-08T08:31:42.319351Z","end":"2025-11-08T08:31:42.480972Z","steps":["trace[366454829] 'agreement among raft nodes before linearized reading'  (duration: 159.920704ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:31:56.642383Z","caller":"traceutil/trace.go:172","msg":"trace[18644385] linearizableReadLoop","detail":"{readStateIndex:1224; appliedIndex:1224; }","duration":"142.869404ms","start":"2025-11-08T08:31:56.499489Z","end":"2025-11-08T08:31:56.642359Z","steps":["trace[18644385] 'read index received'  (duration: 142.86405ms)","trace[18644385] 'applied index is now lower than readState.Index'  (duration: 4.426µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:31:56.643109Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.592136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:31:56.643328Z","caller":"traceutil/trace.go:172","msg":"trace[807841360] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1188; }","duration":"143.851646ms","start":"2025-11-08T08:31:56.499466Z","end":"2025-11-08T08:31:56.643318Z","steps":["trace[807841360] 'agreement among raft nodes before linearized reading'  (duration: 143.491281ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:26.096644Z","caller":"traceutil/trace.go:172","msg":"trace[835865383] linearizableReadLoop","detail":"{readStateIndex:1399; appliedIndex:1399; }","duration":"116.100745ms","start":"2025-11-08T08:32:25.980474Z","end":"2025-11-08T08:32:26.096575Z","steps":["trace[835865383] 'read index received'  (duration: 116.095046ms)","trace[835865383] 'applied index is now lower than readState.Index'  (duration: 4.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:32:26.098961Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.456992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:32:26.099004Z","caller":"traceutil/trace.go:172","msg":"trace[1076838090] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1356; }","duration":"118.526329ms","start":"2025-11-08T08:32:25.980470Z","end":"2025-11-08T08:32:26.098996Z","steps":["trace[1076838090] 'agreement among raft nodes before linearized reading'  (duration: 116.395148ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:26.099144Z","caller":"traceutil/trace.go:172","msg":"trace[901202571] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1357; }","duration":"131.408203ms","start":"2025-11-08T08:32:25.967728Z","end":"2025-11-08T08:32:26.099136Z","steps":["trace[901202571] 'process raft request'  (duration: 129.550119ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:26.100275Z","caller":"traceutil/trace.go:172","msg":"trace[650860181] transaction","detail":"{read_only:false; response_revision:1360; number_of_response:1; }","duration":"109.372539ms","start":"2025-11-08T08:32:25.990893Z","end":"2025-11-08T08:32:26.100266Z","steps":["trace[650860181] 'process raft request'  (duration: 109.148246ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:26.100789Z","caller":"traceutil/trace.go:172","msg":"trace[749763765] transaction","detail":"{read_only:false; response_revision:1358; number_of_response:1; }","duration":"128.893894ms","start":"2025-11-08T08:32:25.971886Z","end":"2025-11-08T08:32:26.100780Z","steps":["trace[749763765] 'process raft request'  (duration: 128.079007ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:26.100927Z","caller":"traceutil/trace.go:172","msg":"trace[66842617] transaction","detail":"{read_only:false; response_revision:1359; number_of_response:1; }","duration":"128.88998ms","start":"2025-11-08T08:32:25.972029Z","end":"2025-11-08T08:32:26.100919Z","steps":["trace[66842617] 'process raft request'  (duration: 127.981966ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:37.204049Z","caller":"traceutil/trace.go:172","msg":"trace[416509695] transaction","detail":"{read_only:false; response_revision:1476; number_of_response:1; }","duration":"156.891861ms","start":"2025-11-08T08:32:37.047143Z","end":"2025-11-08T08:32:37.204035Z","steps":["trace[416509695] 'process raft request'  (duration: 156.783955ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:53.493242Z","caller":"traceutil/trace.go:172","msg":"trace[414007566] transaction","detail":"{read_only:false; response_revision:1622; number_of_response:1; }","duration":"164.368247ms","start":"2025-11-08T08:32:53.328854Z","end":"2025-11-08T08:32:53.493222Z","steps":["trace[414007566] 'process raft request'  (duration: 164.04597ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:53.526088Z","caller":"traceutil/trace.go:172","msg":"trace[1557354259] transaction","detail":"{read_only:false; response_revision:1623; number_of_response:1; }","duration":"106.131264ms","start":"2025-11-08T08:32:53.418706Z","end":"2025-11-08T08:32:53.524837Z","steps":["trace[1557354259] 'process raft request'  (duration: 104.501134ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:32:59.361065Z","caller":"traceutil/trace.go:172","msg":"trace[92335230] transaction","detail":"{read_only:false; response_revision:1640; number_of_response:1; }","duration":"117.399962ms","start":"2025-11-08T08:32:59.243651Z","end":"2025-11-08T08:32:59.361051Z","steps":["trace[92335230] 'process raft request'  (duration: 116.724099ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:35:06 up 5 min,  0 users,  load average: 0.35, 1.17, 0.65
	Linux addons-982714 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [dca7ac470524d0ba7a5f9a8b6d7d21648fda68206fc112cb7e8d8bd0be1757b5] <==
	E1108 08:31:13.765074       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.176.231:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.176.231:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.176.231:443: connect: connection refused" logger="UnhandledError"
	E1108 08:31:13.767736       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.176.231:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.176.231:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.176.231:443: connect: connection refused" logger="UnhandledError"
	I1108 08:31:13.876586       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 08:32:11.484277       1 conn.go:339] Error on socket receive: read tcp 192.168.39.224:8443->192.168.39.1:50992: use of closed network connection
	E1108 08:32:11.676750       1 conn.go:339] Error on socket receive: read tcp 192.168.39.224:8443->192.168.39.1:51026: use of closed network connection
	I1108 08:32:20.901623       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.10.30"}
	I1108 08:32:38.484974       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1108 08:32:38.677312       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.86.240"}
	E1108 08:32:57.303596       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1108 08:33:00.090764       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1108 08:33:14.794635       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1108 08:33:29.847227       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 08:33:29.847399       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 08:33:29.891969       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 08:33:29.892132       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 08:33:29.901065       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 08:33:29.901111       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 08:33:29.971201       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 08:33:29.971485       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1108 08:33:30.060552       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1108 08:33:30.060637       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1108 08:33:30.896168       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1108 08:33:31.063894       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1108 08:33:31.080525       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1108 08:35:04.526604       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.77.164"}
	
	
	==> kube-controller-manager [e7fec1ef4b2121724cf5c6f1a9fe95bfddb3f1832c04930cabf4d817b047d3f3] <==
	E1108 08:33:39.747263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:33:39.989866       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:33:39.990744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:33:49.766047       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:33:49.767118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:33:49.788485       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:33:49.789332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:33:50.433083       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:33:50.434452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1108 08:33:52.571310       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 08:33:52.571345       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:33:52.595884       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 08:33:52.595928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 08:34:08.503052       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:34:08.503987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:34:10.877355       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:34:10.878485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:34:14.583172       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:34:14.584353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:34:40.311657       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:34:40.312734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:34:47.877790       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:34:47.878951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1108 08:34:58.029888       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1108 08:34:58.031100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a1793c995538db844562758b54777896733bb735cf41997c6aa1e9bced26f79a] <==
	I1108 08:30:25.754383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 08:30:25.855888       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 08:30:25.855935       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.224"]
	E1108 08:30:25.856004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 08:30:26.005942       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1108 08:30:26.006079       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 08:30:26.006111       1 server_linux.go:132] "Using iptables Proxier"
	I1108 08:30:26.043282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 08:30:26.043571       1 server.go:527] "Version info" version="v1.34.1"
	I1108 08:30:26.043601       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 08:30:26.112271       1 config.go:200] "Starting service config controller"
	I1108 08:30:26.112290       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 08:30:26.114005       1 config.go:106] "Starting endpoint slice config controller"
	I1108 08:30:26.121007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 08:30:26.114187       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 08:30:26.124047       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 08:30:26.121554       1 config.go:309] "Starting node config controller"
	I1108 08:30:26.124069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 08:30:26.124074       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 08:30:26.216276       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 08:30:26.224686       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 08:30:26.224730       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [212abbfc8a025e4c4d3e3c5a80a44a8396f2b47e6380099c6f940ca523bdd444] <==
	E1108 08:30:14.475337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 08:30:14.475385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 08:30:14.475486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 08:30:14.475668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 08:30:14.475094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 08:30:14.478125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:30:14.478846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:30:14.478936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 08:30:14.479023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 08:30:14.479054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:30:15.413257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 08:30:15.416930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 08:30:15.437637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 08:30:15.494911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 08:30:15.500462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 08:30:15.569234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 08:30:15.607439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 08:30:15.611935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 08:30:15.740733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:30:15.833256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 08:30:15.840339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 08:30:15.847586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:30:15.939702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 08:30:15.945005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1108 08:30:17.853592       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 08:33:33 addons-982714 kubelet[1483]: I1108 08:33:33.362722    1483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ee9f71-7f12-4496-a055-3cd93ab5959b" path="/var/lib/kubelet/pods/13ee9f71-7f12-4496-a055-3cd93ab5959b/volumes"
	Nov 08 08:33:33 addons-982714 kubelet[1483]: I1108 08:33:33.363248    1483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b0e4de-ed9d-4db8-9ce9-875014d6ea1f" path="/var/lib/kubelet/pods/94b0e4de-ed9d-4db8-9ce9-875014d6ea1f/volumes"
	Nov 08 08:33:33 addons-982714 kubelet[1483]: I1108 08:33:33.363974    1483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3400eb0-d48b-437e-91f2-41d5d63cf62e" path="/var/lib/kubelet/pods/c3400eb0-d48b-437e-91f2-41d5d63cf62e/volumes"
	Nov 08 08:33:37 addons-982714 kubelet[1483]: E1108 08:33:37.592570    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590817592082423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:33:37 addons-982714 kubelet[1483]: E1108 08:33:37.592592    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590817592082423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:33:47 addons-982714 kubelet[1483]: E1108 08:33:47.599928    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590827598428716  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:33:47 addons-982714 kubelet[1483]: E1108 08:33:47.599956    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590827598428716  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:33:57 addons-982714 kubelet[1483]: E1108 08:33:57.604981    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590837604701634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:33:57 addons-982714 kubelet[1483]: E1108 08:33:57.604999    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590837604701634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:07 addons-982714 kubelet[1483]: E1108 08:34:07.608027    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590847607270921  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:07 addons-982714 kubelet[1483]: E1108 08:34:07.608067    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590847607270921  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:15 addons-982714 kubelet[1483]: I1108 08:34:15.358750    1483 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:34:17 addons-982714 kubelet[1483]: E1108 08:34:17.611275    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590857610360226  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:17 addons-982714 kubelet[1483]: E1108 08:34:17.611314    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590857610360226  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:27 addons-982714 kubelet[1483]: I1108 08:34:27.359229    1483 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-7xmwf" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:34:27 addons-982714 kubelet[1483]: E1108 08:34:27.614948    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590867613666486  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:27 addons-982714 kubelet[1483]: E1108 08:34:27.615045    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590867613666486  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:37 addons-982714 kubelet[1483]: E1108 08:34:37.618651    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590877617518286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:37 addons-982714 kubelet[1483]: E1108 08:34:37.618698    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590877617518286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:47 addons-982714 kubelet[1483]: E1108 08:34:47.624063    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590887623591796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:47 addons-982714 kubelet[1483]: E1108 08:34:47.624090    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590887623591796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:52 addons-982714 kubelet[1483]: I1108 08:34:52.358985    1483 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9n6dq" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:34:57 addons-982714 kubelet[1483]: E1108 08:34:57.625891    1483 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762590897625605861  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:34:57 addons-982714 kubelet[1483]: E1108 08:34:57.625913    1483 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762590897625605861  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 08 08:35:04 addons-982714 kubelet[1483]: I1108 08:35:04.539268    1483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxvr\" (UniqueName: \"kubernetes.io/projected/b4170603-4f8a-4907-9ed9-0251d0b4689f-kube-api-access-crxvr\") pod \"hello-world-app-5d498dc89-hgnfk\" (UID: \"b4170603-4f8a-4907-9ed9-0251d0b4689f\") " pod="default/hello-world-app-5d498dc89-hgnfk"
	
	
	==> storage-provisioner [caa24655f68058c52f5f786b5b8b5a165b739d74c9c9b9287255cfa9edd23528] <==
	W1108 08:34:42.050357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:44.054654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:44.059886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:46.062739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:46.067179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:48.071252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:48.078740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:50.082325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:50.087236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:52.090438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:52.097489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:54.102788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:54.110712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:56.114246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:56.119324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:58.123564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:34:58.129406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:00.132642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:00.140171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:02.143054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:02.150996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:04.155407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:04.161290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:06.165197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:06.171851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
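A side note on the storage-provisioner block above: the repeated client-go warnings are likely emitted because the provisioner still coordinates through a v1 Endpoints object (the k8s.io-minikube-hostpath key in the etcd trace earlier points the same way, though that reading is an inference, not something the logs state). A minimal sketch of inspecting the replacement API the warning recommends, reusing the context and namespace from the run above:

	# List EndpointSlice objects, the discovery.k8s.io/v1 resource the
	# deprecation warning suggests instead of v1 Endpoints.
	kubectl --context addons-982714 -n kube-system get endpointslices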
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-982714 -n addons-982714
helpers_test.go:269: (dbg) Run:  kubectl --context addons-982714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-hgnfk ingress-nginx-admission-create-szcsf ingress-nginx-admission-patch-vkw9k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-982714 describe pod hello-world-app-5d498dc89-hgnfk ingress-nginx-admission-create-szcsf ingress-nginx-admission-patch-vkw9k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-982714 describe pod hello-world-app-5d498dc89-hgnfk ingress-nginx-admission-create-szcsf ingress-nginx-admission-patch-vkw9k: exit status 1 (76.929969ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-hgnfk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-982714/192.168.39.224
	Start Time:       Sat, 08 Nov 2025 08:35:04 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-crxvr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-crxvr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-hgnfk to addons-982714
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-szcsf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vkw9k" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-982714 describe pod hello-world-app-5d498dc89-hgnfk ingress-nginx-admission-create-szcsf ingress-nginx-admission-patch-vkw9k: exit status 1
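The exit status 1 here is consistent with the two admission pods having already been removed: ingress-nginx runs its admission-create and admission-patch pods as one-shot Jobs, so they can disappear between the phase query at helpers_test.go:269 and the describe call. A minimal sketch of that phase query, using the same context as the run above:

	# List the names of every pod not in Running phase, across all
	# namespaces, mirroring the harness's post-mortem query.
	kubectl --context addons-982714 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'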
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable ingress-dns --alsologtostderr -v=1: (1.103726614s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable ingress --alsologtostderr -v=1: (7.764015206s)
--- FAIL: TestAddons/parallel/Ingress (157.63s)
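For a manual follow-up on this failure, a hedged diagnostic sketch: both the nginx pod and the ingress controller were scheduled per the node description above, so the usual next step is to inspect the Ingress object and the controller's recent logs. The controller pod name is taken from that node description; everything else is standard kubectl, not part of the test run.

	# Show Ingress resources and the most recent controller log lines.
	kubectl --context addons-982714 get ingress -A
	kubectl --context addons-982714 -n ingress-nginx \
	  logs ingress-nginx-controller-675c5ddd98-gwgxb --tail=50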

TestPreload (165.54s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-803502 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1108 09:22:00.325072    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-803502 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m33.899129135s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-803502 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-803502 image pull gcr.io/k8s-minikube/busybox: (3.448012674s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-803502
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-803502: (8.596085407s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-803502 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-803502 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (56.865250574s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-803502 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
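The listing above contains only the images bundled with the Kubernetes v1.32.0 preload; the gcr.io/k8s-minikube/busybox image pulled at preload_test.go:51 did not survive the stop/start cycle. A minimal sketch of re-running that assertion by hand, assuming the test-preload-803502 profile from the run above still exists:

	# Re-check the runtime's image store; grep exits non-zero when the
	# busybox image is absent, matching the failure at preload_test.go:75.
	out/minikube-linux-amd64 -p test-preload-803502 image list | grep k8s-minikube/busybox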
panic.go:636: *** TestPreload FAILED at 2025-11-08 09:24:23.397104906 +0000 UTC m=+3324.820812033
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-803502 -n test-preload-803502
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-803502 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-803502 logs -n 25: (1.058536672s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-041614 ssh -n multinode-041614-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ multinode-041614 ssh -n multinode-041614 sudo cat /home/docker/cp-test_multinode-041614-m03_multinode-041614.txt                                          │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ cp      │ multinode-041614 cp multinode-041614-m03:/home/docker/cp-test.txt multinode-041614-m02:/home/docker/cp-test_multinode-041614-m03_multinode-041614-m02.txt │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ multinode-041614 ssh -n multinode-041614-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ multinode-041614 ssh -n multinode-041614-m02 sudo cat /home/docker/cp-test_multinode-041614-m03_multinode-041614-m02.txt                                  │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ node    │ multinode-041614 node stop m03                                                                                                                            │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ node    │ multinode-041614 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ node    │ list -p multinode-041614                                                                                                                                  │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ stop    │ -p multinode-041614                                                                                                                                       │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:14 UTC │
	│ start   │ -p multinode-041614 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:14 UTC │ 08 Nov 25 09:16 UTC │
	│ node    │ list -p multinode-041614                                                                                                                                  │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ node    │ multinode-041614 node delete m03                                                                                                                          │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ multinode-041614 stop                                                                                                                                     │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:19 UTC │
	│ start   │ -p multinode-041614 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:19 UTC │ 08 Nov 25 09:20 UTC │
	│ node    │ list -p multinode-041614                                                                                                                                  │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:20 UTC │                     │
	│ start   │ -p multinode-041614-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-041614-m02 │ jenkins │ v1.37.0 │ 08 Nov 25 09:20 UTC │                     │
	│ start   │ -p multinode-041614-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-041614-m03 │ jenkins │ v1.37.0 │ 08 Nov 25 09:20 UTC │ 08 Nov 25 09:21 UTC │
	│ node    │ add -p multinode-041614                                                                                                                                   │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:21 UTC │                     │
	│ delete  │ -p multinode-041614-m03                                                                                                                                   │ multinode-041614-m03 │ jenkins │ v1.37.0 │ 08 Nov 25 09:21 UTC │ 08 Nov 25 09:21 UTC │
	│ delete  │ -p multinode-041614                                                                                                                                       │ multinode-041614     │ jenkins │ v1.37.0 │ 08 Nov 25 09:21 UTC │ 08 Nov 25 09:21 UTC │
	│ start   │ -p test-preload-803502 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-803502  │ jenkins │ v1.37.0 │ 08 Nov 25 09:21 UTC │ 08 Nov 25 09:23 UTC │
	│ image   │ test-preload-803502 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-803502  │ jenkins │ v1.37.0 │ 08 Nov 25 09:23 UTC │ 08 Nov 25 09:23 UTC │
	│ stop    │ -p test-preload-803502                                                                                                                                    │ test-preload-803502  │ jenkins │ v1.37.0 │ 08 Nov 25 09:23 UTC │ 08 Nov 25 09:23 UTC │
	│ start   │ -p test-preload-803502 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-803502  │ jenkins │ v1.37.0 │ 08 Nov 25 09:23 UTC │ 08 Nov 25 09:24 UTC │
	│ image   │ test-preload-803502 image list                                                                                                                            │ test-preload-803502  │ jenkins │ v1.37.0 │ 08 Nov 25 09:24 UTC │ 08 Nov 25 09:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:23:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:23:26.391793   33232 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:23:26.391894   33232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:23:26.391902   33232 out.go:374] Setting ErrFile to fd 2...
	I1108 09:23:26.391906   33232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:23:26.392080   33232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:23:26.392481   33232 out.go:368] Setting JSON to false
	I1108 09:23:26.393261   33232 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3947,"bootTime":1762589859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:23:26.393327   33232 start.go:143] virtualization: kvm guest
	I1108 09:23:26.395266   33232 out.go:179] * [test-preload-803502] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:23:26.396580   33232 notify.go:221] Checking for updates...
	I1108 09:23:26.396586   33232 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:23:26.398115   33232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:23:26.399154   33232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:23:26.400224   33232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:23:26.401328   33232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:23:26.402236   33232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:23:26.403585   33232 config.go:182] Loaded profile config "test-preload-803502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1108 09:23:26.404924   33232 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1108 09:23:26.405795   33232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:23:26.437982   33232 out.go:179] * Using the kvm2 driver based on existing profile
	I1108 09:23:26.439021   33232 start.go:309] selected driver: kvm2
	I1108 09:23:26.439033   33232 start.go:930] validating driver "kvm2" against &{Name:test-preload-803502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-803502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:23:26.439139   33232 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:23:26.440011   33232 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:23:26.440042   33232 cni.go:84] Creating CNI manager for ""
	I1108 09:23:26.440126   33232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:23:26.440185   33232 start.go:353] cluster config:
	{Name:test-preload-803502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-803502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:23:26.440293   33232 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:23:26.442081   33232 out.go:179] * Starting "test-preload-803502" primary control-plane node in "test-preload-803502" cluster
	I1108 09:23:26.443055   33232 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1108 09:23:26.552477   33232 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1108 09:23:26.552532   33232 cache.go:59] Caching tarball of preloaded images
	I1108 09:23:26.552699   33232 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1108 09:23:26.555051   33232 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1108 09:23:26.556105   33232 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1108 09:23:26.670595   33232 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1108 09:23:26.670642   33232 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1108 09:23:37.963244   33232 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
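The download above is checksum-verified: the md5 comes from the GCS API and is appended as a ?checksum=md5:... query for the downloader to enforce. A sketch of the equivalent check over the fetched file, assuming a hypothetical local path (minikube delegates this to its download package; illustration only):

    // Sketch: recompute the md5 of the downloaded tarball and compare it
    // against the checksum the GCS API reported for this run.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func main() {
        f, err := os.Open("/tmp/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4") // hypothetical path
        if err != nil {
            panic(err)
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            panic(err)
        }
        got := hex.EncodeToString(h.Sum(nil))
        fmt.Println(got == "2acdb4dde52794f2167c79dcee7507ae") // expected md5 from the log above
    }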
	I1108 09:23:37.963406   33232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/config.json ...
	I1108 09:23:37.963696   33232 start.go:360] acquireMachinesLock for test-preload-803502: {Name:mk17d57b1ca3eb78588f74785db7bcd997a10966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 09:23:37.963782   33232 start.go:364] duration metric: took 59.723µs to acquireMachinesLock for "test-preload-803502"
	I1108 09:23:37.963803   33232 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:23:37.963808   33232 fix.go:54] fixHost starting: 
	I1108 09:23:37.965880   33232 fix.go:112] recreateIfNeeded on test-preload-803502: state=Stopped err=<nil>
	W1108 09:23:37.965904   33232 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:23:37.967554   33232 out.go:252] * Restarting existing kvm2 VM for "test-preload-803502" ...
	I1108 09:23:37.967616   33232 main.go:143] libmachine: starting domain...
	I1108 09:23:37.967633   33232 main.go:143] libmachine: ensuring networks are active...
	I1108 09:23:37.968408   33232 main.go:143] libmachine: Ensuring network default is active
	I1108 09:23:37.968812   33232 main.go:143] libmachine: Ensuring network mk-test-preload-803502 is active
	I1108 09:23:37.969312   33232 main.go:143] libmachine: getting domain XML...
	I1108 09:23:37.970242   33232 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-803502</name>
	  <uuid>4106e967-a8b4-4b4e-9046-028947cfb1b3</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/test-preload-803502.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6d:08:67'/>
	      <source network='mk-test-preload-803502'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0c:3b:d1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
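The XML above is the domain definition libmachine passes to libvirt when it restarts the stopped VM. Starting an already-defined domain through the libvirt API looks roughly like this (a sketch using the github.com/libvirt/libvirt-go bindings, not the kvm2 driver's exact code):

    // Sketch: connect to the system libvirtd, look up the defined domain,
    // and boot it; Create() on a defined domain is what "starting domain" means.
    package main

    import libvirt "github.com/libvirt/libvirt-go"

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // URI from the log above
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        dom, err := conn.LookupDomainByName("test-preload-803502")
        if err != nil {
            panic(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            panic(err)
        }
    }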
	
	I1108 09:23:39.214246   33232 main.go:143] libmachine: waiting for domain to start...
	I1108 09:23:39.215612   33232 main.go:143] libmachine: domain is now running
	I1108 09:23:39.215629   33232 main.go:143] libmachine: waiting for IP...
	I1108 09:23:39.216471   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:39.217047   33232 main.go:143] libmachine: domain test-preload-803502 has current primary IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:39.217060   33232 main.go:143] libmachine: found domain IP: 192.168.39.52
	I1108 09:23:39.217064   33232 main.go:143] libmachine: reserving static IP address...
	I1108 09:23:39.217463   33232 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-803502", mac: "52:54:00:6d:08:67", ip: "192.168.39.52"} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:21:56 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:39.217483   33232 main.go:143] libmachine: skip adding static IP to network mk-test-preload-803502 - found existing host DHCP lease matching {name: "test-preload-803502", mac: "52:54:00:6d:08:67", ip: "192.168.39.52"}
	I1108 09:23:39.217491   33232 main.go:143] libmachine: reserved static IP address 192.168.39.52 for domain test-preload-803502
	I1108 09:23:39.217505   33232 main.go:143] libmachine: waiting for SSH...
	I1108 09:23:39.217513   33232 main.go:143] libmachine: Getting to WaitForSSH function...
	I1108 09:23:39.219895   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:39.220336   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:21:56 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:39.220361   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:39.220520   33232 main.go:143] libmachine: Using SSH client type: native
	I1108 09:23:39.220778   33232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1108 09:23:39.220790   33232 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1108 09:23:42.316756   33232 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.52:22: connect: no route to host
	I1108 09:23:48.396685   33232 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.52:22: connect: no route to host
	I1108 09:23:51.397389   33232 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.52:22: connect: connection refused
	I1108 09:23:54.504433   33232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
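The dial errors above are the normal wait-for-SSH pattern: keep dialing port 22, riding out "no route to host" while the guest boots and "connection refused" until sshd is up, then run `exit 0` to confirm a session. A sketch of that loop (the timings are assumptions, not minikube's):

    // Sketch: poll TCP :22 until it accepts or the deadline passes.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil // port open; the SSH handshake can proceed
            }
            time.Sleep(3 * time.Second) // covers "no route to host" and "connection refused"
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.39.52:22", 2*time.Minute); err != nil {
            panic(err)
        }
    }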
	I1108 09:23:54.507908   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.508367   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:54.508389   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.508619   33232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/config.json ...
	I1108 09:23:54.508841   33232 machine.go:94] provisionDockerMachine start ...
	I1108 09:23:54.511119   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.511418   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:54.511437   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.511610   33232 main.go:143] libmachine: Using SSH client type: native
	I1108 09:23:54.511846   33232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1108 09:23:54.511858   33232 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:23:54.615435   33232 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1108 09:23:54.615469   33232 buildroot.go:166] provisioning hostname "test-preload-803502"
	I1108 09:23:54.618345   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.618755   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:54.618788   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.618998   33232 main.go:143] libmachine: Using SSH client type: native
	I1108 09:23:54.619238   33232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1108 09:23:54.619250   33232 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-803502 && echo "test-preload-803502" | sudo tee /etc/hostname
	I1108 09:23:54.739362   33232 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-803502
	
	I1108 09:23:54.742311   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.742700   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:54.742730   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.742899   33232 main.go:143] libmachine: Using SSH client type: native
	I1108 09:23:54.743109   33232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1108 09:23:54.743132   33232 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-803502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-803502/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-803502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:23:54.856878   33232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:23:54.856941   33232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5845/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5845/.minikube}
	I1108 09:23:54.856991   33232 buildroot.go:174] setting up certificates
	I1108 09:23:54.857005   33232 provision.go:84] configureAuth start
	I1108 09:23:54.859840   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.860236   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:54.860277   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.862443   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.862773   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:54.862792   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:54.862887   33232 provision.go:143] copyHostCerts
	I1108 09:23:54.862930   33232 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem, removing ...
	I1108 09:23:54.862946   33232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem
	I1108 09:23:54.863015   33232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem (1082 bytes)
	I1108 09:23:54.863102   33232 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem, removing ...
	I1108 09:23:54.863109   33232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem
	I1108 09:23:54.863135   33232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem (1123 bytes)
	I1108 09:23:54.863193   33232 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem, removing ...
	I1108 09:23:54.863200   33232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem
	I1108 09:23:54.863222   33232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem (1675 bytes)
	I1108 09:23:54.863277   33232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem org=jenkins.test-preload-803502 san=[127.0.0.1 192.168.39.52 localhost minikube test-preload-803502]
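The server certificate above is signed by the machine CA and carries the SAN list shown in the log. A compact sketch of producing such a cert with crypto/x509 (self-signed here for brevity; the real flow signs with ca.pem/ca-key.pem, and the key size and validity are assumptions):

    // Sketch: build an x509 server cert whose SANs match the log line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-803502"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            DNSNames:     []string{"localhost", "minikube", "test-preload-803502"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.52")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key) // self-signed stand-in for the CA signature
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }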
	I1108 09:23:55.429228   33232 provision.go:177] copyRemoteCerts
	I1108 09:23:55.429290   33232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:23:55.431945   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:55.432356   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:55.432382   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:55.432518   33232 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/id_rsa Username:docker}
	I1108 09:23:55.529023   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:23:55.559853   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1108 09:23:55.588940   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:23:55.617793   33232 provision.go:87] duration metric: took 760.775779ms to configureAuth
	I1108 09:23:55.617820   33232 buildroot.go:189] setting minikube options for container-runtime
	I1108 09:23:55.618022   33232 config.go:182] Loaded profile config "test-preload-803502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1108 09:23:55.620871   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:55.621266   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:55.621291   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:55.621472   33232 main.go:143] libmachine: Using SSH client type: native
	I1108 09:23:55.621703   33232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1108 09:23:55.621720   33232 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:23:55.865370   33232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:23:55.865410   33232 machine.go:97] duration metric: took 1.356555917s to provisionDockerMachine
	I1108 09:23:55.865420   33232 start.go:293] postStartSetup for "test-preload-803502" (driver="kvm2")
	I1108 09:23:55.865429   33232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:23:55.865478   33232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:23:55.868391   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:55.868744   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:55.868763   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:55.868893   33232 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/id_rsa Username:docker}
	I1108 09:23:55.952731   33232 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:23:55.960013   33232 info.go:137] Remote host: Buildroot 2025.02
	I1108 09:23:55.960046   33232 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/addons for local assets ...
	I1108 09:23:55.960130   33232 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/files for local assets ...
	I1108 09:23:55.960235   33232 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem -> 97452.pem in /etc/ssl/certs
	I1108 09:23:55.960387   33232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:23:55.972564   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem --> /etc/ssl/certs/97452.pem (1708 bytes)
	I1108 09:23:56.001433   33232 start.go:296] duration metric: took 136.000265ms for postStartSetup
	I1108 09:23:56.001468   33232 fix.go:56] duration metric: took 18.037660151s for fixHost
	I1108 09:23:56.004232   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.004656   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:56.004680   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.004862   33232 main.go:143] libmachine: Using SSH client type: native
	I1108 09:23:56.005080   33232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1108 09:23:56.005093   33232 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1108 09:23:56.110277   33232 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762593836.077604229
	
	I1108 09:23:56.110305   33232 fix.go:216] guest clock: 1762593836.077604229
	I1108 09:23:56.110314   33232 fix.go:229] Guest: 2025-11-08 09:23:56.077604229 +0000 UTC Remote: 2025-11-08 09:23:56.001472021 +0000 UTC m=+29.654865261 (delta=76.132208ms)
	I1108 09:23:56.110352   33232 fix.go:200] guest clock delta is within tolerance: 76.132208ms
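The fix step above compares the guest's `date +%s.%N` output against the host clock and skips a resync when the delta is inside a tolerance; here the 76ms delta passes. A sketch of that comparison (the 2s tolerance is an assumption):

    // Sketch: absolute guest-host clock delta vs. a tolerance.
    package main

    import (
        "fmt"
        "time"
    )

    func clockOK(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1762593836, 77604229) // 1762593836.077604229 from the log
        host := guest.Add(-76 * time.Millisecond)
        fmt.Println(clockOK(guest, host, 2*time.Second)) // true: 76ms is within tolerance
    }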
	I1108 09:23:56.110367   33232 start.go:83] releasing machines lock for "test-preload-803502", held for 18.14656993s
	I1108 09:23:56.113126   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.113528   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:56.113551   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.114041   33232 ssh_runner.go:195] Run: cat /version.json
	I1108 09:23:56.114124   33232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:23:56.116835   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.117055   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.117239   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:56.117271   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.117411   33232 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/id_rsa Username:docker}
	I1108 09:23:56.117574   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:56.117596   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:56.117759   33232 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/id_rsa Username:docker}
	I1108 09:23:56.194191   33232 ssh_runner.go:195] Run: systemctl --version
	I1108 09:23:56.219549   33232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:23:56.367235   33232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:23:56.374076   33232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:23:56.374148   33232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:23:56.394572   33232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:23:56.394594   33232 start.go:496] detecting cgroup driver to use...
	I1108 09:23:56.394656   33232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:23:56.414313   33232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:23:56.431991   33232 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:23:56.432051   33232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:23:56.449630   33232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:23:56.466137   33232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:23:56.615059   33232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:23:56.838894   33232 docker.go:234] disabling docker service ...
	I1108 09:23:56.838971   33232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:23:56.856163   33232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:23:56.871354   33232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:23:57.021751   33232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:23:57.168984   33232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:23:57.184518   33232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:23:57.206662   33232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1108 09:23:57.206738   33232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:23:57.218635   33232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:23:57.218711   33232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:23:57.230947   33232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:23:57.242964   33232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:23:57.255431   33232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:23:57.268138   33232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:23:57.280082   33232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:23:57.300680   33232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
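The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. Assembled from those commands, the touched keys end up roughly like this (an illustration, not a capture of the file; the section placement follows CRI-O's stock drop-in):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]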
	I1108 09:23:57.312594   33232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:23:57.322744   33232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 09:23:57.322798   33232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 09:23:57.343679   33232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:23:57.355271   33232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:23:57.505036   33232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:23:57.618396   33232 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:23:57.618467   33232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:23:57.623877   33232 start.go:564] Will wait 60s for crictl version
	I1108 09:23:57.623931   33232 ssh_runner.go:195] Run: which crictl
	I1108 09:23:57.627816   33232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 09:23:57.670027   33232 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1108 09:23:57.670099   33232 ssh_runner.go:195] Run: crio --version
	I1108 09:23:57.702963   33232 ssh_runner.go:195] Run: crio --version
	I1108 09:23:57.733832   33232 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1108 09:23:57.737291   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:57.737642   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:23:57.737667   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:23:57.737857   33232 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 09:23:57.742825   33232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:23:57.757627   33232 kubeadm.go:884] updating cluster {Name:test-preload-803502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-803502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:23:57.757761   33232 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1108 09:23:57.757815   33232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:23:57.797023   33232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1108 09:23:57.797105   33232 ssh_runner.go:195] Run: which lz4
	I1108 09:23:57.801671   33232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 09:23:57.806708   33232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 09:23:57.806739   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1108 09:23:59.287774   33232 crio.go:462] duration metric: took 1.486142102s to copy over tarball
	I1108 09:23:59.287849   33232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 09:24:00.943069   33232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.655191612s)
	I1108 09:24:00.943096   33232 crio.go:469] duration metric: took 1.655295547s to extract the tarball
	I1108 09:24:00.943103   33232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 09:24:00.984291   33232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:24:01.029881   33232 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:24:01.029913   33232 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:24:01.029921   33232 kubeadm.go:935] updating node { 192.168.39.52 8443 v1.32.0 crio true true} ...
	I1108 09:24:01.030013   33232 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-803502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-803502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:24:01.030075   33232 ssh_runner.go:195] Run: crio config
	I1108 09:24:01.078456   33232 cni.go:84] Creating CNI manager for ""
	I1108 09:24:01.078483   33232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:24:01.078515   33232 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:24:01.078542   33232 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-803502 NodeName:test-preload-803502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:24:01.078677   33232 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-803502"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.52"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
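
The kubeadm config above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`; it is copied to /var/tmp/minikube/kubeadm.yaml.new just below. A minimal sketch that splits such a file and prints each document's apiVersion/kind, assuming the config is saved locally as kubeadm.yaml — kubeadm itself validates far more strictly than this:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated config shown above.
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", meta.APIVersion, meta.Kind)
	}
}
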
	I1108 09:24:01.078750   33232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1108 09:24:01.091992   33232 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:24:01.092061   33232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:24:01.104959   33232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1108 09:24:01.126979   33232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:24:01.148676   33232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1108 09:24:01.171567   33232 ssh_runner.go:195] Run: grep 192.168.39.52	control-plane.minikube.internal$ /etc/hosts
	I1108 09:24:01.176100   33232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
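
The one-liner above makes the /etc/hosts update idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the current entry, and copies the result back over /etc/hosts. A sketch of the same rewrite in Go, writing to a local hosts.out instead of /etc/hosts so it needs no privileges:

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.52\t" + host

	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	content := strings.TrimRight(string(raw), "\n")
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		// Drop any stale line for the control-plane alias, like the grep -v.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("hosts.out", []byte(out), 0o644); err != nil {
		panic(err)
	}
}
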
	I1108 09:24:01.192878   33232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:24:01.340357   33232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:24:01.362631   33232 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502 for IP: 192.168.39.52
	I1108 09:24:01.362663   33232 certs.go:195] generating shared ca certs ...
	I1108 09:24:01.362688   33232 certs.go:227] acquiring lock for ca certs: {Name:mkf9b4566d45fc9bb33b533126e27cef8349b756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:24:01.362874   33232 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key
	I1108 09:24:01.362988   33232 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key
	I1108 09:24:01.363017   33232 certs.go:257] generating profile certs ...
	I1108 09:24:01.363136   33232 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.key
	I1108 09:24:01.363224   33232 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/apiserver.key.7fdf28a4
	I1108 09:24:01.363281   33232 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/proxy-client.key
	I1108 09:24:01.363440   33232 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/9745.pem (1338 bytes)
	W1108 09:24:01.363485   33232 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5845/.minikube/certs/9745_empty.pem, impossibly tiny 0 bytes
	I1108 09:24:01.363512   33232 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:24:01.363550   33232 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:24:01.363583   33232 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:24:01.363616   33232 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem (1675 bytes)
	I1108 09:24:01.363675   33232 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem (1708 bytes)
	I1108 09:24:01.364322   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:24:01.406022   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:24:01.440656   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:24:01.472681   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:24:01.504701   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 09:24:01.538013   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:24:01.570727   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:24:01.604020   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:24:01.636330   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem --> /usr/share/ca-certificates/97452.pem (1708 bytes)
	I1108 09:24:01.668990   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:24:01.701638   33232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/9745.pem --> /usr/share/ca-certificates/9745.pem (1338 bytes)
	I1108 09:24:01.733528   33232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:24:01.756018   33232 ssh_runner.go:195] Run: openssl version
	I1108 09:24:01.763006   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97452.pem && ln -fs /usr/share/ca-certificates/97452.pem /etc/ssl/certs/97452.pem"
	I1108 09:24:01.777691   33232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97452.pem
	I1108 09:24:01.783837   33232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:38 /usr/share/ca-certificates/97452.pem
	I1108 09:24:01.783915   33232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97452.pem
	I1108 09:24:01.791766   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/97452.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:24:01.806202   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:24:01.820521   33232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:24:01.826367   33232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:24:01.826430   33232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:24:01.834649   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:24:01.849148   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9745.pem && ln -fs /usr/share/ca-certificates/9745.pem /etc/ssl/certs/9745.pem"
	I1108 09:24:01.865252   33232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9745.pem
	I1108 09:24:01.871191   33232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:38 /usr/share/ca-certificates/9745.pem
	I1108 09:24:01.871274   33232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9745.pem
	I1108 09:24:01.879314   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9745.pem /etc/ssl/certs/51391683.0"
	I1108 09:24:01.894073   33232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:24:01.900757   33232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:24:01.908924   33232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:24:01.916985   33232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:24:01.925826   33232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:24:01.934120   33232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:24:01.942585   33232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
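
Each `openssl x509 -checkend 86400` run above exits nonzero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether a cert needs regenerating. The same check with Go's crypto/x509, using one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
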
	I1108 09:24:01.950928   33232 kubeadm.go:401] StartCluster: {Name:test-preload-803502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-803502 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:24:01.951014   33232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:24:01.951104   33232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:24:01.996153   33232 cri.go:89] found id: ""
	I1108 09:24:01.996230   33232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:24:02.010019   33232 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:24:02.010046   33232 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:24:02.010101   33232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:24:02.024227   33232 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:24:02.024684   33232 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-803502" does not appear in /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:24:02.024810   33232 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-5845/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-803502" cluster setting kubeconfig missing "test-preload-803502" context setting]
	I1108 09:24:02.025081   33232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:24:02.050787   33232 kapi.go:59] client config for test-preload-803502: &rest.Config{Host:"https://192.168.39.52:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:24:02.051288   33232 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 09:24:02.051308   33232 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 09:24:02.051314   33232 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 09:24:02.051320   33232 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 09:24:02.051325   33232 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 09:24:02.051639   33232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:24:02.070664   33232 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.52
	I1108 09:24:02.070702   33232 kubeadm.go:1161] stopping kube-system containers ...
	I1108 09:24:02.070713   33232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 09:24:02.070768   33232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:24:02.131630   33232 cri.go:89] found id: ""
	I1108 09:24:02.131711   33232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 09:24:02.155524   33232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:24:02.168547   33232 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:24:02.168568   33232 kubeadm.go:158] found existing configuration files:
	
	I1108 09:24:02.168616   33232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:24:02.180187   33232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:24:02.180247   33232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:24:02.192547   33232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:24:02.204263   33232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:24:02.204337   33232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:24:02.216700   33232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:24:02.228630   33232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:24:02.228689   33232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:24:02.241666   33232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:24:02.253286   33232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:24:02.253347   33232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:24:02.266047   33232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:24:02.280055   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:24:02.347158   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:24:03.555141   33232 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.207925383s)
	I1108 09:24:03.555247   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:24:03.813075   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:24:03.882587   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
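
Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config, as the commands above show. A sketch running the same phase sequence in order, with the binary and config paths taken from the log; it would have to run as root on the node:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Phase order taken from the log; each invocation reuses the same config.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.32.0/kubeadm",
			append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
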
	I1108 09:24:03.983475   33232 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:24:03.983595   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:04.484401   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:04.984421   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:05.483711   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:05.984532   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:06.484037   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:06.518078   33232 api_server.go:72] duration metric: took 2.534614784s to wait for apiserver process to appear ...
	I1108 09:24:06.518110   33232 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:24:06.518143   33232 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I1108 09:24:08.693868   33232 api_server.go:279] https://192.168.39.52:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:24:08.693906   33232 api_server.go:103] status: https://192.168.39.52:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:24:08.693926   33232 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I1108 09:24:08.711838   33232 api_server.go:279] https://192.168.39.52:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:24:08.711862   33232 api_server.go:103] status: https://192.168.39.52:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:24:09.018314   33232 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I1108 09:24:09.026722   33232 api_server.go:279] https://192.168.39.52:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:24:09.026744   33232 api_server.go:103] status: https://192.168.39.52:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:24:09.518344   33232 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I1108 09:24:09.522956   33232 api_server.go:279] https://192.168.39.52:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:24:09.522990   33232 api_server.go:103] status: https://192.168.39.52:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:24:10.018607   33232 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I1108 09:24:10.023603   33232 api_server.go:279] https://192.168.39.52:8443/healthz returned 200:
	ok
	I1108 09:24:10.029994   33232 api_server.go:141] control plane version: v1.32.0
	I1108 09:24:10.030019   33232 api_server.go:131] duration metric: took 3.511902621s to wait for apiserver health ...
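
The 403 → 500 → 200 progression above is the normal apiserver startup sequence: anonymous requests to /healthz are forbidden until the RBAC bootstrap roles exist, then individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) flip from failed to ok. A sketch of the retry loop, treating non-200 responses as "not ready yet"; InsecureSkipVerify stands in for the cluster CA, which this sketch does not have:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// No cluster CA in this sketch; minikube verifies properly.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.39.52:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 and 500 are expected while bootstrap hooks finish.
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}
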
	I1108 09:24:10.030027   33232 cni.go:84] Creating CNI manager for ""
	I1108 09:24:10.030033   33232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:24:10.031768   33232 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 09:24:10.032992   33232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 09:24:10.046928   33232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
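
The log records only that 496 bytes were written to /etc/cni/net.d/1-k8s.conflist; the JSON itself is not shown. A representative bridge-plus-portmap conflist using the pod CIDR from the log — the exact content minikube writes may differ:

package main

import "os"

// A representative bridge CNI configuration; values other than the pod
// CIDR (10.244.0.0/16, from the log) are assumptions for illustration.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
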
	I1108 09:24:10.071197   33232 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:24:10.077437   33232 system_pods.go:59] 7 kube-system pods found
	I1108 09:24:10.077473   33232 system_pods.go:61] "coredns-668d6bf9bc-jkd9t" [d7d10736-5133-4cd1-82e6-581c5a7536c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:24:10.077483   33232 system_pods.go:61] "etcd-test-preload-803502" [68b974ba-2577-4923-b954-994048fbf725] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:24:10.077509   33232 system_pods.go:61] "kube-apiserver-test-preload-803502" [cadf78df-5495-4732-8c6d-3008841c0a4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:24:10.077518   33232 system_pods.go:61] "kube-controller-manager-test-preload-803502" [2bf17fbf-9e07-4169-9544-1f88200cb599] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:24:10.077526   33232 system_pods.go:61] "kube-proxy-sc6kw" [c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:24:10.077534   33232 system_pods.go:61] "kube-scheduler-test-preload-803502" [a6c7699b-8607-44de-9b6b-42347102a718] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:24:10.077541   33232 system_pods.go:61] "storage-provisioner" [61130744-2885-45df-a3bf-8dcb5e61a2e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:24:10.077552   33232 system_pods.go:74] duration metric: took 6.331161ms to wait for pod list to return data ...
	I1108 09:24:10.077558   33232 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:24:10.081757   33232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 09:24:10.081787   33232 node_conditions.go:123] node cpu capacity is 2
	I1108 09:24:10.081798   33232 node_conditions.go:105] duration metric: took 4.234485ms to run NodePressure ...
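
The NodePressure check above reads node capacity and conditions from the API server. A sketch of the same check with client-go, assuming a reachable cluster via the default kubeconfig (library versions are illustrative):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures like the "ephemeral capacity" / "cpu capacity" lines above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
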
	I1108 09:24:10.081852   33232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:24:10.371834   33232 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1108 09:24:10.375880   33232 kubeadm.go:744] kubelet initialised
	I1108 09:24:10.375900   33232 kubeadm.go:745] duration metric: took 4.041238ms waiting for restarted kubelet to initialise ...
	I1108 09:24:10.375914   33232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:24:10.393117   33232 ops.go:34] apiserver oom_adj: -16
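
The probe above reads /proc/<pid>/oom_adj for the apiserver; -16 tells the kernel OOM killer to strongly avoid killing the process. A small equivalent in Go (modern kernels prefer oom_score_adj, but the legacy file is what the log reads):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest process whose name is exactly kube-apiserver.
	out, err := exec.Command("pgrep", "-x", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}
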
	I1108 09:24:10.393138   33232 kubeadm.go:602] duration metric: took 8.38308535s to restartPrimaryControlPlane
	I1108 09:24:10.393149   33232 kubeadm.go:403] duration metric: took 8.442229955s to StartCluster
	I1108 09:24:10.393169   33232 settings.go:142] acquiring lock: {Name:mk0d0617389eeb9d724259ab95a170c08eef0474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:24:10.393245   33232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:24:10.393781   33232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:24:10.394031   33232 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:24:10.394113   33232 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:24:10.394199   33232 addons.go:70] Setting storage-provisioner=true in profile "test-preload-803502"
	I1108 09:24:10.394224   33232 addons.go:239] Setting addon storage-provisioner=true in "test-preload-803502"
	I1108 09:24:10.394228   33232 config.go:182] Loaded profile config "test-preload-803502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1108 09:24:10.394236   33232 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:24:10.394243   33232 addons.go:70] Setting default-storageclass=true in profile "test-preload-803502"
	I1108 09:24:10.394266   33232 host.go:66] Checking if "test-preload-803502" exists ...
	I1108 09:24:10.394270   33232 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-803502"
	I1108 09:24:10.395562   33232 out.go:179] * Verifying Kubernetes components...
	I1108 09:24:10.396627   33232 kapi.go:59] client config for test-preload-803502: &rest.Config{Host:"https://192.168.39.52:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:24:10.396725   33232 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:24:10.396772   33232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:24:10.396895   33232 addons.go:239] Setting addon default-storageclass=true in "test-preload-803502"
	W1108 09:24:10.396911   33232 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:24:10.396927   33232 host.go:66] Checking if "test-preload-803502" exists ...
	I1108 09:24:10.397781   33232 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:24:10.397796   33232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:24:10.398431   33232 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:24:10.398449   33232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:24:10.400645   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:24:10.400950   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:24:10.400971   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:24:10.401102   33232 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/id_rsa Username:docker}
	I1108 09:24:10.401342   33232 main.go:143] libmachine: domain test-preload-803502 has defined MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:24:10.401815   33232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:08:67", ip: ""} in network mk-test-preload-803502: {Iface:virbr1 ExpiryTime:2025-11-08 10:23:50 +0000 UTC Type:0 Mac:52:54:00:6d:08:67 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:test-preload-803502 Clientid:01:52:54:00:6d:08:67}
	I1108 09:24:10.401851   33232 main.go:143] libmachine: domain test-preload-803502 has defined IP address 192.168.39.52 and MAC address 52:54:00:6d:08:67 in network mk-test-preload-803502
	I1108 09:24:10.402024   33232 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/test-preload-803502/id_rsa Username:docker}
	I1108 09:24:10.592671   33232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:24:10.614751   33232 node_ready.go:35] waiting up to 6m0s for node "test-preload-803502" to be "Ready" ...
	I1108 09:24:10.618288   33232 node_ready.go:49] node "test-preload-803502" is "Ready"
	I1108 09:24:10.618319   33232 node_ready.go:38] duration metric: took 3.527888ms for node "test-preload-803502" to be "Ready" ...
	I1108 09:24:10.618333   33232 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:24:10.618378   33232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:24:10.638875   33232 api_server.go:72] duration metric: took 244.808248ms to wait for apiserver process to appear ...
	I1108 09:24:10.638900   33232 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:24:10.638920   33232 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I1108 09:24:10.643131   33232 api_server.go:279] https://192.168.39.52:8443/healthz returned 200:
	ok
	I1108 09:24:10.644113   33232 api_server.go:141] control plane version: v1.32.0
	I1108 09:24:10.644130   33232 api_server.go:131] duration metric: took 5.224194ms to wait for apiserver health ...
	I1108 09:24:10.644138   33232 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:24:10.649572   33232 system_pods.go:59] 7 kube-system pods found
	I1108 09:24:10.649599   33232 system_pods.go:61] "coredns-668d6bf9bc-jkd9t" [d7d10736-5133-4cd1-82e6-581c5a7536c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:24:10.649606   33232 system_pods.go:61] "etcd-test-preload-803502" [68b974ba-2577-4923-b954-994048fbf725] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:24:10.649614   33232 system_pods.go:61] "kube-apiserver-test-preload-803502" [cadf78df-5495-4732-8c6d-3008841c0a4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:24:10.649619   33232 system_pods.go:61] "kube-controller-manager-test-preload-803502" [2bf17fbf-9e07-4169-9544-1f88200cb599] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:24:10.649623   33232 system_pods.go:61] "kube-proxy-sc6kw" [c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9] Running
	I1108 09:24:10.649628   33232 system_pods.go:61] "kube-scheduler-test-preload-803502" [a6c7699b-8607-44de-9b6b-42347102a718] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:24:10.649631   33232 system_pods.go:61] "storage-provisioner" [61130744-2885-45df-a3bf-8dcb5e61a2e7] Running
	I1108 09:24:10.649637   33232 system_pods.go:74] duration metric: took 5.494645ms to wait for pod list to return data ...
	I1108 09:24:10.649647   33232 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:24:10.653985   33232 default_sa.go:45] found service account: "default"
	I1108 09:24:10.654003   33232 default_sa.go:55] duration metric: took 4.352107ms for default service account to be created ...
	I1108 09:24:10.654011   33232 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:24:10.657125   33232 system_pods.go:86] 7 kube-system pods found
	I1108 09:24:10.657160   33232 system_pods.go:89] "coredns-668d6bf9bc-jkd9t" [d7d10736-5133-4cd1-82e6-581c5a7536c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:24:10.657171   33232 system_pods.go:89] "etcd-test-preload-803502" [68b974ba-2577-4923-b954-994048fbf725] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:24:10.657183   33232 system_pods.go:89] "kube-apiserver-test-preload-803502" [cadf78df-5495-4732-8c6d-3008841c0a4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:24:10.657198   33232 system_pods.go:89] "kube-controller-manager-test-preload-803502" [2bf17fbf-9e07-4169-9544-1f88200cb599] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:24:10.657204   33232 system_pods.go:89] "kube-proxy-sc6kw" [c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9] Running
	I1108 09:24:10.657233   33232 system_pods.go:89] "kube-scheduler-test-preload-803502" [a6c7699b-8607-44de-9b6b-42347102a718] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:24:10.657246   33232 system_pods.go:89] "storage-provisioner" [61130744-2885-45df-a3bf-8dcb5e61a2e7] Running
	I1108 09:24:10.657256   33232 system_pods.go:126] duration metric: took 3.23905ms to wait for k8s-apps to be running ...
	I1108 09:24:10.657265   33232 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:24:10.657325   33232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:24:10.669580   33232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:24:10.680560   33232 system_svc.go:56] duration metric: took 23.286159ms WaitForService to wait for kubelet
	I1108 09:24:10.680587   33232 kubeadm.go:587] duration metric: took 286.525063ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:24:10.680608   33232 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:24:10.688182   33232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 09:24:10.688206   33232 node_conditions.go:123] node cpu capacity is 2
	I1108 09:24:10.688215   33232 node_conditions.go:105] duration metric: took 7.602354ms to run NodePressure ...
	I1108 09:24:10.688227   33232 start.go:242] waiting for startup goroutines ...
	I1108 09:24:10.738900   33232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:24:11.427918   33232 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:24:11.429236   33232 addons.go:515] duration metric: took 1.035114623s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:24:11.429276   33232 start.go:247] waiting for cluster config update ...
	I1108 09:24:11.429289   33232 start.go:256] writing updated cluster config ...
	I1108 09:24:11.429653   33232 ssh_runner.go:195] Run: rm -f paused
	I1108 09:24:11.435742   33232 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:24:11.436325   33232 kapi.go:59] client config for test-preload-803502: &rest.Config{Host:"https://192.168.39.52:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/test-preload-803502/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:24:11.439118   33232 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-jkd9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:13.445317   33232 pod_ready.go:94] pod "coredns-668d6bf9bc-jkd9t" is "Ready"
	I1108 09:24:13.445350   33232 pod_ready.go:86] duration metric: took 2.006205387s for pod "coredns-668d6bf9bc-jkd9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:13.448123   33232 pod_ready.go:83] waiting for pod "etcd-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:24:15.454307   33232 pod_ready.go:104] pod "etcd-test-preload-803502" is not "Ready", error: <nil>
	W1108 09:24:17.955315   33232 pod_ready.go:104] pod "etcd-test-preload-803502" is not "Ready", error: <nil>
	W1108 09:24:20.453657   33232 pod_ready.go:104] pod "etcd-test-preload-803502" is not "Ready", error: <nil>
	I1108 09:24:21.955385   33232 pod_ready.go:94] pod "etcd-test-preload-803502" is "Ready"
	I1108 09:24:21.955412   33232 pod_ready.go:86] duration metric: took 8.507264509s for pod "etcd-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:21.958200   33232 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:22.469932   33232 pod_ready.go:94] pod "kube-apiserver-test-preload-803502" is "Ready"
	I1108 09:24:22.469966   33232 pod_ready.go:86] duration metric: took 511.739219ms for pod "kube-apiserver-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:22.472305   33232 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:22.476513   33232 pod_ready.go:94] pod "kube-controller-manager-test-preload-803502" is "Ready"
	I1108 09:24:22.476532   33232 pod_ready.go:86] duration metric: took 4.206249ms for pod "kube-controller-manager-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:22.478923   33232 pod_ready.go:83] waiting for pod "kube-proxy-sc6kw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:22.551305   33232 pod_ready.go:94] pod "kube-proxy-sc6kw" is "Ready"
	I1108 09:24:22.551323   33232 pod_ready.go:86] duration metric: took 72.383939ms for pod "kube-proxy-sc6kw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:22.752941   33232 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:23.151553   33232 pod_ready.go:94] pod "kube-scheduler-test-preload-803502" is "Ready"
	I1108 09:24:23.151581   33232 pod_ready.go:86] duration metric: took 398.615654ms for pod "kube-scheduler-test-preload-803502" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:24:23.151593   33232 pod_ready.go:40] duration metric: took 11.71582344s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
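
Each pod wait above polls until the pod reports the Ready condition (or is gone). A sketch of that poll with client-go for one of the pods from the log; the 4-minute timeout mirrors the log's overall budget:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"etcd-test-preload-803502", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
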
	I1108 09:24:23.191528   33232 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1108 09:24:23.193220   33232 out.go:203] 
	W1108 09:24:23.194543   33232 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1108 09:24:23.195633   33232 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:24:23.196645   33232 out.go:179] * Done! kubectl is now configured to use "test-preload-803502" cluster and "default" namespace by default
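
The warning two lines up is about client/server minor-version skew: kubectl is supported within one minor version of the API server, and 1.34 against 1.32 is a skew of 2. The arithmetic behind the message, as a tiny sketch:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version.
func minor(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, server := "1.34.1", "1.32.0" // versions from the log
	skew := minor(client) - minor(server)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (supported: at most 1)\n", skew)
}
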
	
	
	==> CRI-O <==
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.026337140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca60c672-9d27-4f02-8251-cdffccd6df3c name=/runtime.v1.RuntimeService/Version
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.028012943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff42847a-2fd1-4913-b035-bd389808a5b4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.028411275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593864028391903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff42847a-2fd1-4913-b035-bd389808a5b4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.029550874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a7bba28-973e-4327-a7d3-621ee0e1baf6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.029893239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a7bba28-973e-4327-a7d3-621ee0e1baf6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.030398772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be155cd1c3906f0a3bdf58a6d7000ed19edfc9f1d5fc94a040b0024fec972fe6,PodSandboxId:c5bd81f48f3b4a8d77c28b62cbb77ea7ecf1ac627774779f163edd5ee3d05356,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762593852975511263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jkd9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7d10736-5133-4cd1-82e6-581c5a7536c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b8bcf7752d8d432e676de9fea60b264a050680c90220e9acd0c82d768bb1e,PodSandboxId:18ef29ac3dbcaf18248364f63de277aa8394434c29809655a022b7165f64f702,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762593849339997349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc6kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af732507993a31be92419ffe871b9f6059a962a65a29055deb608d1d99fbcbcc,PodSandboxId:395015af30df14199f5c140d04ed09c8814872e96daef43953241ec8020376b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593849333037346,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61130744-2885-45df-a3bf-8dcb5e61a2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df213a0cec401dcaa2ee4f17c496ccee078307ce29ef0fd713573e1dcf5126d4,PodSandboxId:c8a2cae7393c2eef4bc0ef5e0d87510f0b0ede8f9451b23ddd753d9dba8c541b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762593845773945600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397e6738510954af4976d2bfb3a2dded,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4832bb463f02866da67b49d0595420ffc9da2d9d96717c9024ae7a75f9211b2b,PodSandboxId:8bfa11b70b6563131b8ab6e316a8ecbf837b8ad1c7c77c22800859166d52ae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762593845775317362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8696207690252224986eef7db14dac,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4178d592c59458f8094d38162349a6d7faa1f447c63e6a850ca27e49828e939,PodSandboxId:cfcbb256283a15d833c50a2edd847ae9f2b95f9e11897caa7b8800e2379f8579,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762593845760643802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e26cbb7967e704fd759e12cbadd5cf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d17d83a970551ba61ff67dabbe607e7f49955db66e2d134f2a0913158589b6c,PodSandboxId:2b2315ae2344818da498666ca94a8ae7dbaadeeb1a8bfcd8886102012d1f75b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762593845753859199,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393aea9ede9c675cf9e535bb7303288,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a7bba28-973e-4327-a7d3-621ee0e1baf6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.072778593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46a7da5a-b0e1-4d39-9c92-0a8c347ec204 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.072896976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46a7da5a-b0e1-4d39-9c92-0a8c347ec204 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.074899774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6655b24c-c368-49cd-87fa-018e8fa1efa8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.075373420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593864075290673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6655b24c-c368-49cd-87fa-018e8fa1efa8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.076083533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bd133e0-db8b-4989-9c7c-424f52aab77b name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.076134959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bd133e0-db8b-4989-9c7c-424f52aab77b name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.076290725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be155cd1c3906f0a3bdf58a6d7000ed19edfc9f1d5fc94a040b0024fec972fe6,PodSandboxId:c5bd81f48f3b4a8d77c28b62cbb77ea7ecf1ac627774779f163edd5ee3d05356,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762593852975511263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jkd9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7d10736-5133-4cd1-82e6-581c5a7536c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b8bcf7752d8d432e676de9fea60b264a050680c90220e9acd0c82d768bb1e,PodSandboxId:18ef29ac3dbcaf18248364f63de277aa8394434c29809655a022b7165f64f702,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762593849339997349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc6kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af732507993a31be92419ffe871b9f6059a962a65a29055deb608d1d99fbcbcc,PodSandboxId:395015af30df14199f5c140d04ed09c8814872e96daef43953241ec8020376b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593849333037346,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61130744-2885-45df-a3bf-8dcb5e61a2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df213a0cec401dcaa2ee4f17c496ccee078307ce29ef0fd713573e1dcf5126d4,PodSandboxId:c8a2cae7393c2eef4bc0ef5e0d87510f0b0ede8f9451b23ddd753d9dba8c541b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762593845773945600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397e6738510954af4976d2bfb3a2dded,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4832bb463f02866da67b49d0595420ffc9da2d9d96717c9024ae7a75f9211b2b,PodSandboxId:8bfa11b70b6563131b8ab6e316a8ecbf837b8ad1c7c77c22800859166d52ae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762593845775317362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8696207690252224986eef7db14dac,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4178d592c59458f8094d38162349a6d7faa1f447c63e6a850ca27e49828e939,PodSandboxId:cfcbb256283a15d833c50a2edd847ae9f2b95f9e11897caa7b8800e2379f8579,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762593845760643802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e26cbb7967e704fd759e12cbadd5cf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d17d83a970551ba61ff67dabbe607e7f49955db66e2d134f2a0913158589b6c,PodSandboxId:2b2315ae2344818da498666ca94a8ae7dbaadeeb1a8bfcd8886102012d1f75b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762593845753859199,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393aea9ede9c675cf9e535bb7303288,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bd133e0-db8b-4989-9c7c-424f52aab77b name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.115484733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ac9f6c2-b464-4097-955a-31ddcae97f7c name=/runtime.v1.RuntimeService/Version
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.115572818Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ac9f6c2-b464-4097-955a-31ddcae97f7c name=/runtime.v1.RuntimeService/Version
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.117768890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fa2b6e5-5e67-4a17-aa5a-bc8e30fc8a35 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.119861703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593864119775433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fa2b6e5-5e67-4a17-aa5a-bc8e30fc8a35 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.121261045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75ddbcb2-da1f-4578-bbd6-ea681074df87 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.121334410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75ddbcb2-da1f-4578-bbd6-ea681074df87 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.121530207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be155cd1c3906f0a3bdf58a6d7000ed19edfc9f1d5fc94a040b0024fec972fe6,PodSandboxId:c5bd81f48f3b4a8d77c28b62cbb77ea7ecf1ac627774779f163edd5ee3d05356,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762593852975511263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jkd9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7d10736-5133-4cd1-82e6-581c5a7536c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b8bcf7752d8d432e676de9fea60b264a050680c90220e9acd0c82d768bb1e,PodSandboxId:18ef29ac3dbcaf18248364f63de277aa8394434c29809655a022b7165f64f702,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762593849339997349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc6kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af732507993a31be92419ffe871b9f6059a962a65a29055deb608d1d99fbcbcc,PodSandboxId:395015af30df14199f5c140d04ed09c8814872e96daef43953241ec8020376b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593849333037346,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61130744-2885-45df-a3bf-8dcb5e61a2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df213a0cec401dcaa2ee4f17c496ccee078307ce29ef0fd713573e1dcf5126d4,PodSandboxId:c8a2cae7393c2eef4bc0ef5e0d87510f0b0ede8f9451b23ddd753d9dba8c541b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762593845773945600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397e6738510954af4976d2bfb3a2dded,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4832bb463f02866da67b49d0595420ffc9da2d9d96717c9024ae7a75f9211b2b,PodSandboxId:8bfa11b70b6563131b8ab6e316a8ecbf837b8ad1c7c77c22800859166d52ae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762593845775317362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8696207690252224986eef7db14dac,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4178d592c59458f8094d38162349a6d7faa1f447c63e6a850ca27e49828e939,PodSandboxId:cfcbb256283a15d833c50a2edd847ae9f2b95f9e11897caa7b8800e2379f8579,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762593845760643802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e26cbb7967e704fd759e12cbadd5cf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d17d83a970551ba61ff67dabbe607e7f49955db66e2d134f2a0913158589b6c,PodSandboxId:2b2315ae2344818da498666ca94a8ae7dbaadeeb1a8bfcd8886102012d1f75b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762593845753859199,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393aea9ede9c675cf9e535bb7303288,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75ddbcb2-da1f-4578-bbd6-ea681074df87 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.139293746Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9b6548d2-b0e9-43ac-ba0f-04357f9d98eb name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.139499284Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c5bd81f48f3b4a8d77c28b62cbb77ea7ecf1ac627774779f163edd5ee3d05356,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-jkd9t,Uid:d7d10736-5133-4cd1-82e6-581c5a7536c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593852744215490,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-jkd9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7d10736-5133-4cd1-82e6-581c5a7536c3,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-08T09:24:08.901088731Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18ef29ac3dbcaf18248364f63de277aa8394434c29809655a022b7165f64f702,Metadata:&PodSandboxMetadata{Name:kube-proxy-sc6kw,Uid:c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593849225522272,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sc6kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-08T09:24:08.901084621Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:395015af30df14199f5c140d04ed09c8814872e96daef43953241ec8020376b0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:61130744-2885-45df-a3bf-8dcb5e61a2e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593849209151735,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61130744-2885-45df-a3bf-8dcb5e61a2e7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-08T09:24:08.901087148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b2315ae2344818da498666ca94a8ae7dbaadeeb1a8bfcd8886102012d1f75b2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-803502,Uid:c393aea9ede9c675cf9e535bb7303288,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593845503922058,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393aea9ede9c675cf9e535bb7303288,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c393aea9ede9c675cf9e535bb7303288,kubernetes.io/config.seen: 2025-11-08T09:24:03.896914560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cfcbb256283a15d833c50a2edd847ae9f2b95f9e11897caa7b8800e2379f8579,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-803502,Uid:e26cbb7967e704fd759e12cbadd5cf7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593845501565068,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e26cbb7967e704fd759e12cbadd5cf7b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e26cbb7967e704fd759e12cbadd5cf7b,kubernetes.io/config.seen: 2025-11-08T09:24:03.896913069Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c8a2cae7393c2eef4bc0ef5e0d87510f0b0ede8f9451b23ddd753d9dba8c541b,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-803502,Uid:397e6738510954af4976d2bfb3a2dded,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593845499978246,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397e6738510954af4976d2bfb3a2dded,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.52:2379,kubernetes.io/config.hash: 397e6738510954af4976d2bfb3a2dded,kubernetes.io/config.seen: 2025-11-08T09:24:03.964670345Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bfa11b70b6563131b8ab6e316a8ecbf837b8ad1c7c77c22800859166d52ae9e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-803502,Uid:da8696207690252224986eef7db14dac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762593845492833352,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8696207690252224986eef7db14dac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.52:8443,kubernetes.io/config.hash: da8696207690252224986eef7db14dac,kubernetes.io/config.seen: 2025-11-08T09:24:03.896908334Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9b6548d2-b0e9-43ac-ba0f-04357f9d98eb name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.140952905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c0db65a-c009-4819-aa2f-87ceed7dbce8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.141020298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c0db65a-c009-4819-aa2f-87ceed7dbce8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:24:24 test-preload-803502 crio[845]: time="2025-11-08 09:24:24.141173466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be155cd1c3906f0a3bdf58a6d7000ed19edfc9f1d5fc94a040b0024fec972fe6,PodSandboxId:c5bd81f48f3b4a8d77c28b62cbb77ea7ecf1ac627774779f163edd5ee3d05356,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762593852975511263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jkd9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7d10736-5133-4cd1-82e6-581c5a7536c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b8bcf7752d8d432e676de9fea60b264a050680c90220e9acd0c82d768bb1e,PodSandboxId:18ef29ac3dbcaf18248364f63de277aa8394434c29809655a022b7165f64f702,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762593849339997349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc6kw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af732507993a31be92419ffe871b9f6059a962a65a29055deb608d1d99fbcbcc,PodSandboxId:395015af30df14199f5c140d04ed09c8814872e96daef43953241ec8020376b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593849333037346,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61130744-2885-45df-a3bf-8dcb5e61a2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df213a0cec401dcaa2ee4f17c496ccee078307ce29ef0fd713573e1dcf5126d4,PodSandboxId:c8a2cae7393c2eef4bc0ef5e0d87510f0b0ede8f9451b23ddd753d9dba8c541b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762593845773945600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397e6738510954af4976d2bfb3a2dded,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4832bb463f02866da67b49d0595420ffc9da2d9d96717c9024ae7a75f9211b2b,PodSandboxId:8bfa11b70b6563131b8ab6e316a8ecbf837b8ad1c7c77c22800859166d52ae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762593845775317362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8696207690252224986eef7db14dac,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4178d592c59458f8094d38162349a6d7faa1f447c63e6a850ca27e49828e939,PodSandboxId:cfcbb256283a15d833c50a2edd847ae9f2b95f9e11897caa7b8800e2379f8579,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762593845760643802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e26cbb7967e704fd759e12cbadd5cf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d17d83a970551ba61ff67dabbe607e7f49955db66e2d134f2a0913158589b6c,PodSandboxId:2b2315ae2344818da498666ca94a8ae7dbaadeeb1a8bfcd8886102012d1f75b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762593845753859199,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-803502,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393aea9ede9c675cf9e535bb7303288,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c0db65a-c009-4819-aa2f-87ceed7dbce8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be155cd1c3906       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   c5bd81f48f3b4       coredns-668d6bf9bc-jkd9t
	592b8bcf7752d       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   18ef29ac3dbca       kube-proxy-sc6kw
	af732507993a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   395015af30df1       storage-provisioner
	4832bb463f028       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   8bfa11b70b656       kube-apiserver-test-preload-803502
	df213a0cec401       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   c8a2cae7393c2       etcd-test-preload-803502
	c4178d592c594       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   cfcbb256283a1       kube-controller-manager-test-preload-803502
	4d17d83a97055       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   2b2315ae23448       kube-scheduler-test-preload-803502
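
	The table above is the human-readable view of the ListContainers responses in the CRI-O section: all seven containers are Running at attempt 1, consistent with exactly one restart of each after the reboot. Roughly the same view can be taken directly on the node; a sketch reusing this report's ssh convention and assuming crictl is on the guest PATH, as it is in minikube's CRI-O images:
	
	  out/minikube-linux-amd64 -p test-preload-803502 ssh "sudo crictl ps -a"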
	
	
	==> coredns [be155cd1c3906f0a3bdf58a6d7000ed19edfc9f1d5fc94a040b0024fec972fe6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59035 - 62924 "HINFO IN 5404115674056974478.4822850925148971541. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042144003s
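
	The NXDOMAIN reply to the random HINFO query is consistent with CoreDNS's loop-detection probe completing normally, i.e. DNS came back healthy after the restart. For an end-to-end check, the standard busybox lookup pattern works; a sketch in which the pod name and image tag are illustrative, while the context name comes from this run:
	
	  kubectl --context test-preload-803502 run dns-check --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default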
	
	
	==> describe nodes <==
	Name:               test-preload-803502
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-803502
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=test-preload-803502
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_22_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:22:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-803502
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:24:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:24:10 +0000   Sat, 08 Nov 2025 09:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:24:10 +0000   Sat, 08 Nov 2025 09:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:24:10 +0000   Sat, 08 Nov 2025 09:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:24:10 +0000   Sat, 08 Nov 2025 09:24:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    test-preload-803502
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4106e967a8b44b4e9046028947cfb1b3
	  System UUID:                4106e967-a8b4-4b4e-9046-028947cfb1b3
	  Boot ID:                    2639e7b6-3eb5-438e-b994-85258bac50f8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-jkd9t                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     109s
	  kube-system                 etcd-test-preload-803502                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         113s
	  kube-system                 kube-apiserver-test-preload-803502             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-test-preload-803502    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-sc6kw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-test-preload-803502             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 107s               kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 2m                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  119s (x8 over 2m)  kubelet          Node test-preload-803502 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 2m)  kubelet          Node test-preload-803502 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x7 over 2m)  kubelet          Node test-preload-803502 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s               kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    113s               kubelet          Node test-preload-803502 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  113s               kubelet          Node test-preload-803502 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     113s               kubelet          Node test-preload-803502 status is now: NodeHasSufficientPID
	  Normal   NodeReady                113s               kubelet          Node test-preload-803502 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  113s               kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           110s               node-controller  Node test-preload-803502 event: Registered Node test-preload-803502 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node test-preload-803502 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node test-preload-803502 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node test-preload-803502 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 16s                kubelet          Node test-preload-803502 has been rebooted, boot id: 2639e7b6-3eb5-438e-b994-85258bac50f8
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-803502 event: Registered Node test-preload-803502 in Controller
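
	The event stream records three kubelet generations: the original start (about 2m ago), a restart during initial provisioning (about 113s ago), and the reboot exercised by this test (about 21s ago), whose Rebooted warning carries the same boot id listed under System Info. The node's post-reboot state can be re-read through the context this run configured; a minimal sketch:
	
	  kubectl --context test-preload-803502 get node test-preload-803502 -o wide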
	
	
	==> dmesg <==
	[Nov 8 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001403] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001244] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.011803] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083616] kauditd_printk_skb: 4 callbacks suppressed
	[Nov 8 09:24] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.482180] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.027549] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [df213a0cec401dcaa2ee4f17c496ccee078307ce29ef0fd713573e1dcf5126d4] <==
	{"level":"info","ts":"2025-11-08T09:24:06.351561Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26c9414d925de00c","local-member-id":"3baf479dc31b93a9","added-peer-id":"3baf479dc31b93a9","added-peer-peer-urls":["https://192.168.39.52:2380"]}
	{"level":"info","ts":"2025-11-08T09:24:06.358005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-08T09:24:06.359470Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T09:24:06.358107Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26c9414d925de00c","local-member-id":"3baf479dc31b93a9","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:24:06.359798Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-11-08T09:24:06.373761Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2025-11-08T09:24:06.373824Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:24:06.379940Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"3baf479dc31b93a9","initial-advertise-peer-urls":["https://192.168.39.52:2380"],"listen-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.52:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T09:24:06.380072Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T09:24:07.514558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T09:24:07.514594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T09:24:07.514626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgPreVoteResp from 3baf479dc31b93a9 at term 2"}
	{"level":"info","ts":"2025-11-08T09:24:07.514639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T09:24:07.514645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 received MsgVoteResp from 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-11-08T09:24:07.514653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 became leader at term 3"}
	{"level":"info","ts":"2025-11-08T09:24:07.514659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3baf479dc31b93a9 elected leader 3baf479dc31b93a9 at term 3"}
	{"level":"info","ts":"2025-11-08T09:24:07.517484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:24:07.517882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:24:07.518246Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T09:24:07.518296Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T09:24:07.517487Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"3baf479dc31b93a9","local-member-attributes":"{Name:test-preload-803502 ClientURLs:[https://192.168.39.52:2379]}","request-path":"/0/members/3baf479dc31b93a9/attributes","cluster-id":"26c9414d925de00c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T09:24:07.518659Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-08T09:24:07.519043Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-08T09:24:07.519357Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-08T09:24:07.519581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.52:2379"}
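
	As a single-member cluster, etcd re-elects itself immediately on restart: the member moves from term 2 to term 3 by casting the only pre-vote and vote, then resumes serving clients on 2379. Since the log shows the metrics listener on http://127.0.0.1:2381, liveness can be probed from inside the guest; a sketch, assuming etcd's usual /health endpoint on the metrics URL:
	
	  out/minikube-linux-amd64 -p test-preload-803502 ssh "curl -s http://127.0.0.1:2381/health"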
	
	
	==> kernel <==
	 09:24:24 up 0 min,  0 users,  load average: 0.94, 0.26, 0.09
	Linux test-preload-803502 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4832bb463f02866da67b49d0595420ffc9da2d9d96717c9024ae7a75f9211b2b] <==
	I1108 09:24:08.740877       1 policy_source.go:240] refreshing policies
	I1108 09:24:08.747113       1 shared_informer.go:320] Caches are synced for configmaps
	I1108 09:24:08.756786       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:24:08.756942       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:24:08.759815       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:24:08.783899       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1108 09:24:08.784022       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1108 09:24:08.784581       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:24:08.784627       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:24:08.784643       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:24:08.784659       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:24:08.792185       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:24:08.799954       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:24:08.816897       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:24:08.816936       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1108 09:24:08.867429       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:24:08.953996       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1108 09:24:09.650039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:24:10.205496       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1108 09:24:10.242978       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1108 09:24:10.267529       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:24:10.277282       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:24:11.999773       1 controller.go:615] quota admission added evaluator for: endpoints
	I1108 09:24:12.099367       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:24:12.299847       1 controller.go:615] quota admission added evaluator for: replicasets.apps
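Note: the single E-level line above is the endpoint reconciler running right after restart, before the apiserver has re-registered its own IP in storage; it refuses to erase the kubernetes Service endpoints rather than clear a possibly valid list, and the quota-admission lines that follow show startup continuing normally. A hedged spot-check that the endpoints repopulated, assuming kubectl access to this profile:

	# Sketch: the default/kubernetes Endpoints object should list 192.168.39.52:8443.
	kubectl --context test-preload-803502 get endpoints kubernetes -n default -o wide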
	
	
	==> kube-controller-manager [c4178d592c59458f8094d38162349a6d7faa1f447c63e6a850ca27e49828e939] <==
	I1108 09:24:11.952399       1 shared_informer.go:320] Caches are synced for deployment
	I1108 09:24:11.957201       1 shared_informer.go:320] Caches are synced for TTL
	I1108 09:24:11.961540       1 shared_informer.go:320] Caches are synced for expand
	I1108 09:24:11.966945       1 shared_informer.go:320] Caches are synced for resource quota
	I1108 09:24:11.977391       1 shared_informer.go:320] Caches are synced for node
	I1108 09:24:11.977467       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:24:11.977532       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:24:11.977542       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1108 09:24:11.977551       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1108 09:24:11.977735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-803502"
	I1108 09:24:11.980533       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1108 09:24:11.983745       1 shared_informer.go:320] Caches are synced for PVC protection
	I1108 09:24:11.988086       1 shared_informer.go:320] Caches are synced for crt configmap
	I1108 09:24:11.991891       1 shared_informer.go:320] Caches are synced for service account
	I1108 09:24:11.993646       1 shared_informer.go:320] Caches are synced for persistent volume
	I1108 09:24:11.994010       1 shared_informer.go:320] Caches are synced for daemon sets
	I1108 09:24:11.994160       1 shared_informer.go:320] Caches are synced for PV protection
	I1108 09:24:11.994522       1 shared_informer.go:320] Caches are synced for attach detach
	I1108 09:24:11.996616       1 shared_informer.go:320] Caches are synced for stateful set
	I1108 09:24:12.000660       1 shared_informer.go:320] Caches are synced for garbage collector
	I1108 09:24:12.307620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="363.115249ms"
	I1108 09:24:12.308365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="131.519µs"
	I1108 09:24:13.123193       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="68.844µs"
	I1108 09:24:13.382459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.541357ms"
	I1108 09:24:13.384991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="231.758µs"
	
	
	==> kube-proxy [592b8bcf7752d8d432e676de9fea60b264a050680c90220e9acd0c82d768bb1e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1108 09:24:09.532067       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1108 09:24:09.541874       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	E1108 09:24:09.541938       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:24:09.580182       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1108 09:24:09.580234       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 09:24:09.580257       1 server_linux.go:170] "Using iptables Proxier"
	I1108 09:24:09.583141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:24:09.583566       1 server.go:497] "Version info" version="v1.32.0"
	I1108 09:24:09.583607       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:24:09.585229       1 config.go:199] "Starting service config controller"
	I1108 09:24:09.585282       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1108 09:24:09.585308       1 config.go:105] "Starting endpoint slice config controller"
	I1108 09:24:09.585311       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1108 09:24:09.586284       1 config.go:329] "Starting node config controller"
	I1108 09:24:09.586382       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1108 09:24:09.686960       1 shared_informer.go:320] Caches are synced for service config
	I1108 09:24:09.687479       1 shared_informer.go:320] Caches are synced for node config
	I1108 09:24:09.687491       1 shared_informer.go:320] Caches are synced for endpoint slice config
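Note: the truncated nftables errors at the top of this section are cleanup attempts against a guest kernel without nftables support ("Operation not supported"); kube-proxy then finds no IPv6 iptables either and proceeds single-stack IPv4 with the iptables proxier, so these lines are startup noise rather than the failure under test. A hedged sketch for verifying which backend is actually usable in the VM (KUBE-SERVICES is kube-proxy's standard nat chain in iptables mode):

	# Sketch: probe nftables, then confirm kube-proxy's iptables rules exist.
	minikube -p test-preload-803502 ssh -- 'sudo nft list tables || echo nftables unavailable'
	minikube -p test-preload-803502 ssh -- 'sudo iptables -t nat -L KUBE-SERVICES -n | head'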
	
	
	==> kube-scheduler [4d17d83a970551ba61ff67dabbe607e7f49955db66e2d134f2a0913158589b6c] <==
	I1108 09:24:07.190650       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:24:08.693724       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:24:08.693749       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:24:08.693762       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:24:08.693771       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:24:08.797965       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1108 09:24:08.797992       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:24:08.807780       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:24:08.807810       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 09:24:08.809293       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1108 09:24:08.809415       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:24:08.907929       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
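Note: the W-level authentication lines are the scheduler failing to read kube-system/extension-apiserver-authentication before RBAC caches warm up; it continues without that configuration and the client-ca informer syncs a few lines later, so no action was needed here. For clusters where the warning persists, the log itself names the fix; a hedged rendering with the placeholders filled in (the binding name is illustrative, and --user matches the denied identity in this log rather than the --serviceaccount form the message suggests):

	# Sketch: grant the scheduler identity read access to the authentication configmap.
	kubectl create rolebinding scheduler-authn-reader -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler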
	
	
	==> kubelet <==
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.869447    1174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.871756    1174 setters.go:602] "Node became not ready" node="test-preload-803502" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T09:24:08Z","lastTransitionTime":"2025-11-08T09:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.894727    1174 apiserver.go:52] "Watching apiserver"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: E1108 09:24:08.903522    1174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-jkd9t" podUID="d7d10736-5133-4cd1-82e6-581c5a7536c3"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.917556    1174 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.951732    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/61130744-2885-45df-a3bf-8dcb5e61a2e7-tmp\") pod \"storage-provisioner\" (UID: \"61130744-2885-45df-a3bf-8dcb5e61a2e7\") " pod="kube-system/storage-provisioner"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.951777    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9-xtables-lock\") pod \"kube-proxy-sc6kw\" (UID: \"c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9\") " pod="kube-system/kube-proxy-sc6kw"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: I1108 09:24:08.951804    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9-lib-modules\") pod \"kube-proxy-sc6kw\" (UID: \"c1a07cc7-5d69-45fb-b9e4-4a9b7a9ebeb9\") " pod="kube-system/kube-proxy-sc6kw"
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: E1108 09:24:08.953068    1174 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 08 09:24:08 test-preload-803502 kubelet[1174]: E1108 09:24:08.953657    1174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7d10736-5133-4cd1-82e6-581c5a7536c3-config-volume podName:d7d10736-5133-4cd1-82e6-581c5a7536c3 nodeName:}" failed. No retries permitted until 2025-11-08 09:24:09.453594536 +0000 UTC m=+5.664784858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7d10736-5133-4cd1-82e6-581c5a7536c3-config-volume") pod "coredns-668d6bf9bc-jkd9t" (UID: "d7d10736-5133-4cd1-82e6-581c5a7536c3") : object "kube-system"/"coredns" not registered
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: I1108 09:24:09.077007    1174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-803502"
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: I1108 09:24:09.077062    1174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-803502"
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: I1108 09:24:09.077427    1174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-803502"
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: E1108 09:24:09.091797    1174 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-803502\" already exists" pod="kube-system/kube-scheduler-test-preload-803502"
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: E1108 09:24:09.093077    1174 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-803502\" already exists" pod="kube-system/etcd-test-preload-803502"
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: E1108 09:24:09.094783    1174 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-803502\" already exists" pod="kube-system/kube-apiserver-test-preload-803502"
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: E1108 09:24:09.454955    1174 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 08 09:24:09 test-preload-803502 kubelet[1174]: E1108 09:24:09.455029    1174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7d10736-5133-4cd1-82e6-581c5a7536c3-config-volume podName:d7d10736-5133-4cd1-82e6-581c5a7536c3 nodeName:}" failed. No retries permitted until 2025-11-08 09:24:10.455010628 +0000 UTC m=+6.666200950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7d10736-5133-4cd1-82e6-581c5a7536c3-config-volume") pod "coredns-668d6bf9bc-jkd9t" (UID: "d7d10736-5133-4cd1-82e6-581c5a7536c3") : object "kube-system"/"coredns" not registered
	Nov 08 09:24:10 test-preload-803502 kubelet[1174]: I1108 09:24:10.154770    1174 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 08 09:24:10 test-preload-803502 kubelet[1174]: E1108 09:24:10.461069    1174 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 08 09:24:10 test-preload-803502 kubelet[1174]: E1108 09:24:10.461131    1174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7d10736-5133-4cd1-82e6-581c5a7536c3-config-volume podName:d7d10736-5133-4cd1-82e6-581c5a7536c3 nodeName:}" failed. No retries permitted until 2025-11-08 09:24:12.461117974 +0000 UTC m=+8.672308296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7d10736-5133-4cd1-82e6-581c5a7536c3-config-volume") pod "coredns-668d6bf9bc-jkd9t" (UID: "d7d10736-5133-4cd1-82e6-581c5a7536c3") : object "kube-system"/"coredns" not registered
	Nov 08 09:24:13 test-preload-803502 kubelet[1174]: E1108 09:24:13.982413    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593853981871292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 08 09:24:13 test-preload-803502 kubelet[1174]: E1108 09:24:13.982459    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593853981871292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 08 09:24:23 test-preload-803502 kubelet[1174]: E1108 09:24:23.984940    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593863983469754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 08 09:24:23 test-preload-803502 kubelet[1174]: E1108 09:24:23.984982    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593863983469754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
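Note: three threads interleave in this kubelet excerpt. The node reports NotReady until a CNI config appears in /etc/cni/net.d, and the coredns pod sync is skipped for the same reason; the coredns config-volume mount retries with doubling backoff (500ms, 1s, 2s) because the kube-system/coredns ConfigMap is not yet in the restarted kubelet's informer cache; and the eviction-manager errors reflect a CRI-O ImageFsInfo response with no container-filesystem stats, which stalls eviction accounting only. A hedged check that the CNI config eventually landed (minikube writes a bridge conflist; the exact filename varies by version):

	# Sketch: list CNI configs in the guest and show the bridge conflist head.
	minikube -p test-preload-803502 ssh -- 'ls -l /etc/cni/net.d/ && sudo head -20 /etc/cni/net.d/*.conflist'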
	
	
	==> storage-provisioner [af732507993a31be92419ffe871b9f6059a962a65a29055deb608d1d99fbcbcc] <==
	I1108 09:24:09.429102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-803502 -n test-preload-803502
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-803502 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-803502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-803502
--- FAIL: TestPreload (165.54s)

TestPause/serial/SecondStartNoReconfiguration (59.98s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-022459 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-022459 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.778400741s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-022459] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-022459" primary control-plane node in "pause-022459" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-022459" cluster and "default" namespace by default

-- /stdout --
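Note: the assertion at pause_test.go:100 looks for the literal marker "The running cluster does not require reconfiguration" in the second start's output; the stdout above instead shows the full "Preparing Kubernetes ... / Configuring bridge CNI" path, so the marker never appeared. A hedged manual repro of the same check, using grep's exit status in place of the test's substring match:

	# Sketch: rerun the second start and grep for the expected marker.
	out/minikube-linux-amd64 start -p pause-022459 --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio 2>&1 \
	  | grep -F "The running cluster does not require reconfiguration"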
** stderr ** 
	I1108 09:32:35.805580   41684 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:32:35.805846   41684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:32:35.805857   41684 out.go:374] Setting ErrFile to fd 2...
	I1108 09:32:35.805861   41684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:32:35.806072   41684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:32:35.806487   41684 out.go:368] Setting JSON to false
	I1108 09:32:35.807410   41684 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4497,"bootTime":1762589859,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:32:35.807491   41684 start.go:143] virtualization: kvm guest
	I1108 09:32:35.809164   41684 out.go:179] * [pause-022459] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:32:35.810584   41684 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:32:35.810594   41684 notify.go:221] Checking for updates...
	I1108 09:32:35.812823   41684 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:32:35.814058   41684 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:32:35.815206   41684 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:32:35.816351   41684 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:32:35.817425   41684 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:32:35.818822   41684 config.go:182] Loaded profile config "pause-022459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:32:35.819423   41684 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:32:35.868164   41684 out.go:179] * Using the kvm2 driver based on existing profile
	I1108 09:32:35.869461   41684 start.go:309] selected driver: kvm2
	I1108 09:32:35.869480   41684 start.go:930] validating driver "kvm2" against &{Name:pause-022459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:32:35.869629   41684 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:32:35.870599   41684 cni.go:84] Creating CNI manager for ""
	I1108 09:32:35.870668   41684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:32:35.870742   41684 start.go:353] cluster config:
	{Name:pause-022459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:32:35.870872   41684 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:32:35.872441   41684 out.go:179] * Starting "pause-022459" primary control-plane node in "pause-022459" cluster
	I1108 09:32:35.873527   41684 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:32:35.873563   41684 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:32:35.873577   41684 cache.go:59] Caching tarball of preloaded images
	I1108 09:32:35.873686   41684 preload.go:233] Found /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:32:35.873698   41684 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:32:35.873824   41684 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/config.json ...
	I1108 09:32:35.874079   41684 start.go:360] acquireMachinesLock for pause-022459: {Name:mk17d57b1ca3eb78588f74785db7bcd997a10966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 09:32:44.230569   41684 start.go:364] duration metric: took 8.356395335s to acquireMachinesLock for "pause-022459"
	I1108 09:32:44.230622   41684 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:32:44.230629   41684 fix.go:54] fixHost starting: 
	I1108 09:32:44.233061   41684 fix.go:112] recreateIfNeeded on pause-022459: state=Running err=<nil>
	W1108 09:32:44.233103   41684 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:32:44.234603   41684 out.go:252] * Updating the running kvm2 "pause-022459" VM ...
	I1108 09:32:44.234631   41684 machine.go:94] provisionDockerMachine start ...
	I1108 09:32:44.239030   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.239667   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:44.239712   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.239984   41684 main.go:143] libmachine: Using SSH client type: native
	I1108 09:32:44.240252   41684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1108 09:32:44.240267   41684 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:32:44.367373   41684 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-022459
	
	I1108 09:32:44.367417   41684 buildroot.go:166] provisioning hostname "pause-022459"
	I1108 09:32:44.371271   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.371812   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:44.371847   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.372070   41684 main.go:143] libmachine: Using SSH client type: native
	I1108 09:32:44.372369   41684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1108 09:32:44.372391   41684 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-022459 && echo "pause-022459" | sudo tee /etc/hostname
	I1108 09:32:44.518138   41684 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-022459
	
	I1108 09:32:44.521532   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.522117   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:44.522174   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.522460   41684 main.go:143] libmachine: Using SSH client type: native
	I1108 09:32:44.522759   41684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1108 09:32:44.522789   41684 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-022459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-022459/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-022459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:32:44.651653   41684 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:32:44.651684   41684 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5845/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5845/.minikube}
	I1108 09:32:44.651706   41684 buildroot.go:174] setting up certificates
	I1108 09:32:44.651716   41684 provision.go:84] configureAuth start
	I1108 09:32:44.655158   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.655780   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:44.655820   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.658296   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.658744   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:44.658770   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.658948   41684 provision.go:143] copyHostCerts
	I1108 09:32:44.659013   41684 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem, removing ...
	I1108 09:32:44.659035   41684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem
	I1108 09:32:44.659106   41684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem (1082 bytes)
	I1108 09:32:44.659242   41684 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem, removing ...
	I1108 09:32:44.659254   41684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem
	I1108 09:32:44.659291   41684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem (1123 bytes)
	I1108 09:32:44.659382   41684 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem, removing ...
	I1108 09:32:44.659393   41684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem
	I1108 09:32:44.659424   41684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem (1675 bytes)
	I1108 09:32:44.659541   41684 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem org=jenkins.pause-022459 san=[127.0.0.1 192.168.39.96 localhost minikube pause-022459]
	I1108 09:32:44.864845   41684 provision.go:177] copyRemoteCerts
	I1108 09:32:44.864910   41684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:32:44.868131   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.868634   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:44.868670   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:44.868820   41684 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/pause-022459/id_rsa Username:docker}
	I1108 09:32:44.966472   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:32:45.003691   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 09:32:45.043258   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:32:45.081139   41684 provision.go:87] duration metric: took 429.411425ms to configureAuth
	I1108 09:32:45.081165   41684 buildroot.go:189] setting minikube options for container-runtime
	I1108 09:32:45.081362   41684 config.go:182] Loaded profile config "pause-022459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:32:45.084269   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:45.084708   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:45.084736   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:45.084889   41684 main.go:143] libmachine: Using SSH client type: native
	I1108 09:32:45.085114   41684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1108 09:32:45.085140   41684 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:32:50.698643   41684 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:32:50.698672   41684 machine.go:97] duration metric: took 6.464026412s to provisionDockerMachine
	I1108 09:32:50.698687   41684 start.go:293] postStartSetup for "pause-022459" (driver="kvm2")
	I1108 09:32:50.698700   41684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:32:50.698758   41684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:32:50.703879   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:50.704443   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:50.704484   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:50.704904   41684 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/pause-022459/id_rsa Username:docker}
	I1108 09:32:50.806685   41684 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:32:50.812624   41684 info.go:137] Remote host: Buildroot 2025.02
	I1108 09:32:50.812652   41684 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/addons for local assets ...
	I1108 09:32:50.812717   41684 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/files for local assets ...
	I1108 09:32:50.812797   41684 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem -> 97452.pem in /etc/ssl/certs
	I1108 09:32:50.812944   41684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:32:50.831547   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem --> /etc/ssl/certs/97452.pem (1708 bytes)
	I1108 09:32:50.878392   41684 start.go:296] duration metric: took 179.68916ms for postStartSetup
	I1108 09:32:50.878437   41684 fix.go:56] duration metric: took 6.647807814s for fixHost
	I1108 09:32:50.881757   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:50.882198   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:50.882228   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:50.882436   41684 main.go:143] libmachine: Using SSH client type: native
	I1108 09:32:50.882678   41684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1108 09:32:50.882691   41684 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1108 09:32:51.011673   41684 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762594371.008766739
	
	I1108 09:32:51.011698   41684 fix.go:216] guest clock: 1762594371.008766739
	I1108 09:32:51.011709   41684 fix.go:229] Guest: 2025-11-08 09:32:51.008766739 +0000 UTC Remote: 2025-11-08 09:32:50.878442548 +0000 UTC m=+15.133256058 (delta=130.324191ms)
	I1108 09:32:51.011728   41684 fix.go:200] guest clock delta is within tolerance: 130.324191ms
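Note: fix.go compares the guest's "date +%s.%N" output against the host-side timestamp captured when provisioning finished: 1762594371.008766739 - 1762594370.878442548 is about 0.1303s, inside the tolerance, so no clock resync is pushed to the VM. A hedged one-liner to eyeball the same skew by hand:

	# Sketch: print host and guest epoch seconds side by side.
	echo "host:  $(date -u +%s.%N)"; minikube -p pause-022459 ssh -- 'echo "guest: $(date -u +%s.%N)"'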
	I1108 09:32:51.011735   41684 start.go:83] releasing machines lock for "pause-022459", held for 6.781132462s
	I1108 09:32:51.015735   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:51.016293   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:51.016335   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:51.017097   41684 ssh_runner.go:195] Run: cat /version.json
	I1108 09:32:51.017304   41684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:32:51.021594   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:51.021936   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:51.022172   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:51.022214   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:51.022467   41684 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/pause-022459/id_rsa Username:docker}
	I1108 09:32:51.022476   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:51.022566   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:51.022977   41684 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/pause-022459/id_rsa Username:docker}
	I1108 09:32:51.115023   41684 ssh_runner.go:195] Run: systemctl --version
	I1108 09:32:51.145768   41684 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:32:51.313263   41684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:32:51.324009   41684 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:32:51.324080   41684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:32:51.337005   41684 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:32:51.337033   41684 start.go:496] detecting cgroup driver to use...
	I1108 09:32:51.337098   41684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:32:51.365395   41684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:32:51.384244   41684 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:32:51.384323   41684 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:32:51.409382   41684 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:32:51.430386   41684 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:32:51.686402   41684 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:32:51.888408   41684 docker.go:234] disabling docker service ...
	I1108 09:32:51.888477   41684 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:32:51.919287   41684 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:32:51.936772   41684 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:32:52.128910   41684 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:32:52.319133   41684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:32:52.338071   41684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:32:52.363062   41684 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:32:52.363138   41684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.377607   41684 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:32:52.377677   41684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.393195   41684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.411366   41684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.425701   41684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:32:52.441098   41684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.458254   41684 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.474485   41684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:32:52.491133   41684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:32:52.505117   41684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:32:52.518574   41684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:32:52.707142   41684 ssh_runner.go:195] Run: sudo systemctl restart crio
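Note: the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0, before the daemon-reload and crio restart pick the file up. A hedged verification that the drop-in carries those values:

	# Sketch: grep the rewritten drop-in; expected values per the sed edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	minikube -p pause-022459 ssh -- \
	  'grep -nE "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" /etc/crio/crio.conf.d/02-crio.conf'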
	I1108 09:32:53.268254   41684 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:32:53.268333   41684 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:32:53.274896   41684 start.go:564] Will wait 60s for crictl version
	I1108 09:32:53.274980   41684 ssh_runner.go:195] Run: which crictl
	I1108 09:32:53.280197   41684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 09:32:53.322948   41684 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1108 09:32:53.323037   41684 ssh_runner.go:195] Run: crio --version
	I1108 09:32:53.358773   41684 ssh_runner.go:195] Run: crio --version
	I1108 09:32:53.397642   41684 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1108 09:32:53.401569   41684 main.go:143] libmachine: domain pause-022459 has defined MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:53.402065   41684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:06:35", ip: ""} in network mk-pause-022459: {Iface:virbr1 ExpiryTime:2025-11-08 10:31:31 +0000 UTC Type:0 Mac:52:54:00:d9:06:35 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:pause-022459 Clientid:01:52:54:00:d9:06:35}
	I1108 09:32:53.402090   41684 main.go:143] libmachine: domain pause-022459 has defined IP address 192.168.39.96 and MAC address 52:54:00:d9:06:35 in network mk-pause-022459
	I1108 09:32:53.402301   41684 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 09:32:53.407435   41684 kubeadm.go:884] updating cluster {Name:pause-022459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:32:53.407588   41684 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:32:53.407631   41684 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:32:53.455246   41684 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:32:53.455268   41684 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:32:53.455320   41684 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:32:53.500150   41684 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:32:53.500169   41684 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:32:53.500176   41684 kubeadm.go:935] updating node { 192.168.39.96 8443 v1.34.1 crio true true} ...
	I1108 09:32:53.500258   41684 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-022459 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-022459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
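	The drop-in rendered above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). A minimal sketch, not captured output, for checking on the node that the override took effect:
	
	  # show the effective unit including the ExecStart override
	  sudo systemctl cat kubelet --no-pager
	  # confirm the running unit picked up the overridden flags
	  sudo systemctl status kubelet --no-pager
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf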
	I1108 09:32:53.500313   41684 ssh_runner.go:195] Run: crio config
	I1108 09:32:53.550747   41684 cni.go:84] Creating CNI manager for ""
	I1108 09:32:53.550767   41684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:32:53.550781   41684 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:32:53.550808   41684 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.96 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-022459 NodeName:pause-022459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:32:53.550954   41684 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-022459"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
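	The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before the init phases run. A minimal sketch, not captured output, for sanity-checking the staged file (assumes the `kubeadm config validate` subcommand shipped with recent kubeadm releases; the binary path matches the binaries dir logged below):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new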
	
	I1108 09:32:53.551025   41684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:32:53.566639   41684 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:32:53.566713   41684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:32:53.580661   41684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1108 09:32:53.611174   41684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:32:53.638227   41684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1108 09:32:53.660434   41684 ssh_runner.go:195] Run: grep 192.168.39.96	control-plane.minikube.internal$ /etc/hosts
	I1108 09:32:53.665146   41684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:32:53.859558   41684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:32:53.881379   41684 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459 for IP: 192.168.39.96
	I1108 09:32:53.881411   41684 certs.go:195] generating shared ca certs ...
	I1108 09:32:53.881429   41684 certs.go:227] acquiring lock for ca certs: {Name:mkf9b4566d45fc9bb33b533126e27cef8349b756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:32:53.881635   41684 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key
	I1108 09:32:53.881681   41684 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key
	I1108 09:32:53.881693   41684 certs.go:257] generating profile certs ...
	I1108 09:32:53.881813   41684 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.key
	I1108 09:32:53.881907   41684 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/apiserver.key.34ed9007
	I1108 09:32:53.881959   41684 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/proxy-client.key
	I1108 09:32:53.882078   41684 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/9745.pem (1338 bytes)
	W1108 09:32:53.882113   41684 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5845/.minikube/certs/9745_empty.pem, impossibly tiny 0 bytes
	I1108 09:32:53.882122   41684 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:32:53.882148   41684 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:32:53.882168   41684 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:32:53.882191   41684 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem (1675 bytes)
	I1108 09:32:53.882229   41684 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem (1708 bytes)
	I1108 09:32:53.883515   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:32:53.919835   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:32:54.016479   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:32:54.104687   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:32:54.172082   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:32:54.253543   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:32:54.370459   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:32:54.468517   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:32:54.549393   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/9745.pem --> /usr/share/ca-certificates/9745.pem (1338 bytes)
	I1108 09:32:54.611712   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem --> /usr/share/ca-certificates/97452.pem (1708 bytes)
	I1108 09:32:54.679168   41684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:32:54.743486   41684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:32:54.790294   41684 ssh_runner.go:195] Run: openssl version
	I1108 09:32:54.802870   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9745.pem && ln -fs /usr/share/ca-certificates/9745.pem /etc/ssl/certs/9745.pem"
	I1108 09:32:54.834164   41684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9745.pem
	I1108 09:32:54.850635   41684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:38 /usr/share/ca-certificates/9745.pem
	I1108 09:32:54.850696   41684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9745.pem
	I1108 09:32:54.867805   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9745.pem /etc/ssl/certs/51391683.0"
	I1108 09:32:54.907732   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97452.pem && ln -fs /usr/share/ca-certificates/97452.pem /etc/ssl/certs/97452.pem"
	I1108 09:32:54.942152   41684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97452.pem
	I1108 09:32:54.954106   41684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:38 /usr/share/ca-certificates/97452.pem
	I1108 09:32:54.954172   41684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97452.pem
	I1108 09:32:54.965077   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/97452.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:32:55.032382   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:32:55.067989   41684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:32:55.084407   41684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:32:55.084489   41684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:32:55.106329   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
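	The three blocks above follow OpenSSL's subject-hash convention: each PEM is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <hash>.0 so the system trust store can find it. A minimal sketch of the idiom (any PEM path works):
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # yields b5213941.0 for the minikube CA in this run
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"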
	I1108 09:32:55.141399   41684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:32:55.155080   41684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:32:55.178525   41684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:32:55.202094   41684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:32:55.216522   41684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:32:55.235190   41684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:32:55.251666   41684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
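	Each check above relies on `-checkend 86400`, which exits non-zero when the certificate expires within the next 24 hours; that exit code is what decides whether a cert gets regenerated. A sketch, not captured output:
	
	  if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver cert expires within 24h; regenerate"
	  fi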
	I1108 09:32:55.274908   41684 kubeadm.go:401] StartCluster: {Name:pause-022459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-022459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:32:55.275067   41684 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:32:55.275164   41684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:32:55.465981   41684 cri.go:89] found id: "4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c"
	I1108 09:32:55.466007   41684 cri.go:89] found id: "356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987"
	I1108 09:32:55.466013   41684 cri.go:89] found id: "885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70"
	I1108 09:32:55.466017   41684 cri.go:89] found id: "4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6"
	I1108 09:32:55.466021   41684 cri.go:89] found id: "b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d"
	I1108 09:32:55.466026   41684 cri.go:89] found id: "91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310"
	I1108 09:32:55.466030   41684 cri.go:89] found id: "e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba"
	I1108 09:32:55.466034   41684 cri.go:89] found id: "9c1781af9d254003f57431a008dbd305d695c4a7b22b0394512a38a14f1626b0"
	I1108 09:32:55.466038   41684 cri.go:89] found id: "866b2618f8007747a26d56b7a72550d44a773826497b64294318b5150163a926"
	I1108 09:32:55.466058   41684 cri.go:89] found id: "862a4f78a966af1058bd7d1a3a1d9e673d07c78f7f425c8a6930f299a7a66d89"
	I1108 09:32:55.466067   41684 cri.go:89] found id: "2a139fae3fa4ce1ab8e9e90adddcc72a81e6c1c18f69c7f78024ef18b33d9524"
	I1108 09:32:55.466070   41684 cri.go:89] found id: ""
	I1108 09:32:55.466138   41684 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
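A sketch (not from this run) for mapping the bare container IDs listed in the log above back to pod names; `crictl ps -o json` is standard, and jq is assumed to be available on the node:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json \
      | jq -r '.containers[] | "\(.id[0:13])  \(.metadata.name)"'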
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-022459 -n pause-022459
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-022459 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-022459 logs -n 25: (1.952804489s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                               ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-615410 sudo systemctl cat kubelet --no-pager                                                                                                             │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                              │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/kubernetes/kubelet.conf                                                                                                             │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /var/lib/kubelet/config.yaml                                                                                                             │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl status docker --all --full --no-pager                                                                                              │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl cat docker --no-pager                                                                                                              │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/docker/daemon.json                                                                                                                  │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo docker system info                                                                                                                           │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl cat cri-docker --no-pager                                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cri-dockerd --version                                                                                                                        │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl status containerd --all --full --no-pager                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl cat containerd --no-pager                                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/containerd/config.toml                                                                                                              │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo containerd config dump                                                                                                                       │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl status crio --all --full --no-pager                                                                                                │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl cat crio --no-pager                                                                                                                │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo crio config                                                                                                                                  │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p auto-615410                                                                                                                                                   │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ -p custom-flannel-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio │ custom-flannel-615410 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p kindnet-615410 pgrep -a kubelet                                                                                                                               │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:06
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:33:06.059883   42540 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:06.060185   42540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:06.060196   42540 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:06.060202   42540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:06.060478   42540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:33:06.061186   42540 out.go:368] Setting JSON to false
	I1108 09:33:06.062389   42540 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4527,"bootTime":1762589859,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:33:06.062520   42540 start.go:143] virtualization: kvm guest
	I1108 09:33:06.064410   42540 out.go:179] * [custom-flannel-615410] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:33:06.065667   42540 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:33:06.065673   42540 notify.go:221] Checking for updates...
	I1108 09:33:06.066773   42540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:06.068297   42540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:33:06.069338   42540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:33:06.070441   42540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:33:06.071474   42540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:06.073118   42540 config.go:182] Loaded profile config "calico-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:06.073273   42540 config.go:182] Loaded profile config "guest-788314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 09:33:06.073393   42540 config.go:182] Loaded profile config "kindnet-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:06.073605   42540 config.go:182] Loaded profile config "pause-022459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:06.073718   42540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:06.117072   42540 out.go:179] * Using the kvm2 driver based on user configuration
	I1108 09:33:06.118135   42540 start.go:309] selected driver: kvm2
	I1108 09:33:06.118158   42540 start.go:930] validating driver "kvm2" against <nil>
	I1108 09:33:06.118173   42540 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:06.119253   42540 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:33:06.119622   42540 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:33:06.119666   42540 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 09:33:06.119684   42540 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1108 09:33:06.119744   42540 start.go:353] cluster config:
	{Name:custom-flannel-615410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-615410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:33:06.119897   42540 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:33:06.121379   42540 out.go:179] * Starting "custom-flannel-615410" primary control-plane node in "custom-flannel-615410" cluster
	I1108 09:33:06.122404   42540 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:06.122450   42540 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:33:06.122476   42540 cache.go:59] Caching tarball of preloaded images
	I1108 09:33:06.122598   42540 preload.go:233] Found /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:33:06.122613   42540 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:33:06.122733   42540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/config.json ...
	I1108 09:33:06.122758   42540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/config.json: {Name:mkd2913ce083f135dcd902d780686a82341b48f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:06.122935   42540 start.go:360] acquireMachinesLock for custom-flannel-615410: {Name:mk17d57b1ca3eb78588f74785db7bcd997a10966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 09:33:06.122991   42540 start.go:364] duration metric: took 32.271µs to acquireMachinesLock for "custom-flannel-615410"
	I1108 09:33:06.123019   42540 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-615410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-615410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:06.123088   42540 start.go:125] createHost starting for "" (driver="kvm2")
	I1108 09:33:04.492430   41318 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:33:04.492454   41318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1108 09:33:04.524908   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:33:06.444126   41318 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.91918695s)
	I1108 09:33:06.444176   41318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:33:06.444269   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:06.444285   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-615410 minikube.k8s.io/updated_at=2025_11_08T09_33_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=calico-615410 minikube.k8s.io/primary=true
	I1108 09:33:06.470577   41318 ops.go:34] apiserver oom_adj: -16
	I1108 09:33:06.659022   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:07.159957   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:07.659130   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:08.159199   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:08.659726   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:09.159355   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:09.659961   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:10.112491   41318 kubeadm.go:1114] duration metric: took 3.668289508s to wait for elevateKubeSystemPrivileges
	I1108 09:33:10.112550   41318 kubeadm.go:403] duration metric: took 18.858855921s to StartCluster
	I1108 09:33:10.112572   41318 settings.go:142] acquiring lock: {Name:mk0d0617389eeb9d724259ab95a170c08eef0474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:10.112658   41318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:33:10.114312   41318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:10.114587   41318 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.75 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:10.114780   41318 config.go:182] Loaded profile config "calico-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:10.114833   41318 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:33:10.114913   41318 addons.go:70] Setting storage-provisioner=true in profile "calico-615410"
	I1108 09:33:10.114941   41318 addons.go:239] Setting addon storage-provisioner=true in "calico-615410"
	I1108 09:33:10.114969   41318 host.go:66] Checking if "calico-615410" exists ...
	I1108 09:33:10.115715   41318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:33:10.115816   41318 addons.go:70] Setting default-storageclass=true in profile "calico-615410"
	I1108 09:33:10.115839   41318 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-615410"
	I1108 09:33:10.115990   41318 out.go:179] * Verifying Kubernetes components...
	I1108 09:33:10.116933   41318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:33:10.119161   41318 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:33:06.528714   41684 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c 356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987 885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70 4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6 b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba 9c1781af9d254003f57431a008dbd305d695c4a7b22b0394512a38a14f1626b0 866b2618f8007747a26d56b7a72550d44a773826497b64294318b5150163a926 862a4f78a966af1058bd7d1a3a1d9e673d07c78f7f425c8a6930f299a7a66d89 2a139fae3fa4ce1ab8e9e90adddcc72a81e6c1c18f69c7f78024ef18b33d9524: (10.82053907s)
	W1108 09:33:06.528795   41684 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c 356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987 885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70 4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6 b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba 9c1781af9d254003f57431a008dbd305d695c4a7b22b0394512a38a14f1626b0 866b2618f8007747a26d56b7a72550d44a773826497b64294318b5150163a926 862a4f78a966af1058bd7d1a3a1d9e673d07c78f7f425c8a6930f299a7a66d89 2a139fae3fa4ce1ab8e9e90adddcc72a81e6c1c18f69c7f78024ef18b33d9524: Process exited with status 1
	stdout:
	4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c
	356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987
	885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70
	4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6
	b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d
	
	stderr:
	E1108 09:33:06.520360    3569 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310\": container with ID starting with 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 not found: ID does not exist" containerID="91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310"
	time="2025-11-08T09:33:06Z" level=fatal msg="stopping the container \"91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310\": rpc error: code = NotFound desc = could not find container \"91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310\": container with ID starting with 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 not found: ID does not exist"
	I1108 09:33:06.528858   41684 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 09:33:06.589746   41684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:33:06.608040   41684 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5623 Nov  8 09:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5641 Nov  8 09:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Nov  8 09:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5589 Nov  8 09:31 /etc/kubernetes/scheduler.conf
	
	I1108 09:33:06.608124   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:33:06.622856   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:33:06.635781   41684 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:33:06.635858   41684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:33:06.652359   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:33:06.669416   41684 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:33:06.669512   41684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:33:06.683991   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:33:06.696865   41684 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:33:06.696924   41684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
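	The pattern above is the same for each kubeconfig: grep for the expected control-plane endpoint and delete the file when the endpoint is absent, so the kubeconfig init phase below regenerates it. A minimal sketch, not captured output:
	
	  f=/etc/kubernetes/scheduler.conf
	  sudo grep -q "https://control-plane.minikube.internal:8443" "$f" \
	    || sudo rm -f "$f"   # recreated by `kubeadm init phase kubeconfig all`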
	I1108 09:33:06.713075   41684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:33:06.725978   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:06.788603   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:08.522641   41684 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.73399969s)
	I1108 09:33:08.522723   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:08.947837   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:09.065234   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:09.179171   41684 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:33:09.179269   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:09.680247   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:10.179665   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:10.268169   41684 api_server.go:72] duration metric: took 1.089007176s to wait for apiserver process to appear ...
	I1108 09:33:10.268199   41684 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:33:10.268220   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:10.268775   41684 api_server.go:269] stopped: https://192.168.39.96:8443/healthz: Get "https://192.168.39.96:8443/healthz": dial tcp 192.168.39.96:8443: connect: connection refused
	I1108 09:33:10.770010   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
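	The loop above polls the apiserver's /healthz endpoint; the first probe is refused because the restarted static pod is not listening yet. A sketch of the same probe by hand (-k because the serving cert is cluster-signed, not in the host trust store):
	
	  curl -k https://192.168.39.96:8443/healthz   # "ok" once the apiserver is healthy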
	I1108 09:33:06.124626   42540 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1108 09:33:06.124890   42540 start.go:159] libmachine.API.Create for "custom-flannel-615410" (driver="kvm2")
	I1108 09:33:06.124927   42540 client.go:173] LocalClient.Create starting
	I1108 09:33:06.124996   42540 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem
	I1108 09:33:06.125043   42540 main.go:143] libmachine: Decoding PEM data...
	I1108 09:33:06.125065   42540 main.go:143] libmachine: Parsing certificate...
	I1108 09:33:06.125127   42540 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem
	I1108 09:33:06.125153   42540 main.go:143] libmachine: Decoding PEM data...
	I1108 09:33:06.125168   42540 main.go:143] libmachine: Parsing certificate...
	I1108 09:33:06.125624   42540 main.go:143] libmachine: creating domain...
	I1108 09:33:06.125646   42540 main.go:143] libmachine: creating network...
	I1108 09:33:06.127385   42540 main.go:143] libmachine: found existing default network
	I1108 09:33:06.127705   42540 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1108 09:33:06.128911   42540 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3e:18:b4} reservation:<nil>}
	I1108 09:33:06.129916   42540 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a2:80:b6} reservation:<nil>}
	I1108 09:33:06.130621   42540 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:14:52} reservation:<nil>}
	I1108 09:33:06.131859   42540 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d2a3e0}
	I1108 09:33:06.131955   42540 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-custom-flannel-615410</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1108 09:33:06.138619   42540 main.go:143] libmachine: creating private network mk-custom-flannel-615410 192.168.72.0/24...
	I1108 09:33:06.231047   42540 main.go:143] libmachine: private network mk-custom-flannel-615410 192.168.72.0/24 created
	I1108 09:33:06.231424   42540 main.go:143] libmachine: <network>
	  <name>mk-custom-flannel-615410</name>
	  <uuid>370a223f-7a96-4fb6-b0e9-86c1871fca6f</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:29:c0:41'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
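	Each profile gets its own isolated libvirt network (mk-<profile>) alongside the stock default NAT network shown earlier. A minimal sketch, not captured output, for inspecting it with the standard libvirt CLI against the same URI the driver uses:
	
	  virsh --connect qemu:///system net-list --all
	  virsh --connect qemu:///system net-dumpxml mk-custom-flannel-615410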
	
	I1108 09:33:06.231465   42540 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410 ...
	I1108 09:33:06.231615   42540 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1108 09:33:06.231629   42540 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:33:06.231730   42540 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21866-5845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1108 09:33:06.477074   42540 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa...
	I1108 09:33:06.561988   42540 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/custom-flannel-615410.rawdisk...
	I1108 09:33:06.562064   42540 main.go:143] libmachine: Writing magic tar header
	I1108 09:33:06.562093   42540 main.go:143] libmachine: Writing SSH key tar header
	I1108 09:33:06.562181   42540 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410 ...
	I1108 09:33:06.562305   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410
	I1108 09:33:06.562354   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410 (perms=drwx------)
	I1108 09:33:06.562375   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube/machines
	I1108 09:33:06.562389   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube/machines (perms=drwxr-xr-x)
	I1108 09:33:06.562404   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:33:06.562422   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube (perms=drwxr-xr-x)
	I1108 09:33:06.562430   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845
	I1108 09:33:06.562438   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845 (perms=drwxrwxr-x)
	I1108 09:33:06.562448   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1108 09:33:06.562457   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1108 09:33:06.562463   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1108 09:33:06.562473   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1108 09:33:06.562482   42540 main.go:143] libmachine: checking permissions on dir: /home
	I1108 09:33:06.562491   42540 main.go:143] libmachine: skipping /home - not owner
	I1108 09:33:06.562508   42540 main.go:143] libmachine: defining domain...
	I1108 09:33:06.563873   42540 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>custom-flannel-615410</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/custom-flannel-615410.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-custom-flannel-615410'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
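
	The <domain> XML above is only registered at this point ("defining domain..."); the VM is booted a few lines further down ("starting domain..."), after both networks are confirmed active. A sketch of those two calls with the same bindings; defineAndStartDomain is an invented name and this is not the driver's actual code:

	package sketch

	import (
		"fmt"

		"libvirt.org/go/libvirt"
	)

	// defineAndStartDomain registers the <domain> XML with libvirtd, then
	// boots the VM, mirroring "defining domain..." and the later
	// "starting domain..." steps in the log.
	func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
		dom, err := conn.DomainDefineXML(domainXML) // persistent definition only
		if err != nil {
			return nil, fmt.Errorf("define domain: %w", err)
		}
		if err := dom.Create(); err != nil { // actually boots the machine
			dom.Free()
			return nil, fmt.Errorf("start domain: %w", err)
		}
		return dom, nil
	}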
	
	I1108 09:33:06.569099   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:d5:95:6f in network default
	I1108 09:33:06.569832   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:06.569849   42540 main.go:143] libmachine: starting domain...
	I1108 09:33:06.569853   42540 main.go:143] libmachine: ensuring networks are active...
	I1108 09:33:06.570677   42540 main.go:143] libmachine: Ensuring network default is active
	I1108 09:33:06.571033   42540 main.go:143] libmachine: Ensuring network mk-custom-flannel-615410 is active
	I1108 09:33:06.571665   42540 main.go:143] libmachine: getting domain XML...
	I1108 09:33:06.572896   42540 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>custom-flannel-615410</name>
	  <uuid>b92ee915-57b3-40ae-b0f3-23047055b527</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/custom-flannel-615410.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a1:36:61'/>
	      <source network='mk-custom-flannel-615410'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d5:95:6f'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1108 09:33:08.028531   42540 main.go:143] libmachine: waiting for domain to start...
	I1108 09:33:08.030073   42540 main.go:143] libmachine: domain is now running
	I1108 09:33:08.030096   42540 main.go:143] libmachine: waiting for IP...
	I1108 09:33:08.030852   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.031560   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.031575   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.031932   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.031970   42540 retry.go:31] will retry after 258.718043ms: waiting for domain to come up
	I1108 09:33:08.292647   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.293457   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.293485   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.293977   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.294021   42540 retry.go:31] will retry after 377.236405ms: waiting for domain to come up
	I1108 09:33:08.673581   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.674426   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.674447   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.674905   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.674947   42540 retry.go:31] will retry after 299.001423ms: waiting for domain to come up
	I1108 09:33:08.975748   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.976550   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.976569   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.977009   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.977066   42540 retry.go:31] will retry after 419.143ms: waiting for domain to come up
	I1108 09:33:09.397797   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:09.398674   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:09.398694   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:09.399153   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:09.399200   42540 retry.go:31] will retry after 523.040388ms: waiting for domain to come up
	I1108 09:33:09.924075   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:09.925050   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:09.925076   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:09.925518   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:09.925562   42540 retry.go:31] will retry after 686.008423ms: waiting for domain to come up
	I1108 09:33:10.613670   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:10.614410   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:10.614429   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:10.614901   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:10.614939   42540 retry.go:31] will retry after 1.11728343s: waiting for domain to come up
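
	The block above is libmachine's IP wait: each attempt first asks libvirt for a DHCP lease, falls back to an ARP listing, and on a miss schedules a retry with a growing, jittered delay ("will retry after 258.718043ms", "377.236405ms", ...). A sketch of the lease-polling half; waitForIP, the fixed jitter, and the timeout handling are assumptions of this sketch, and the ARP fallback is omitted:

	package sketch

	import (
		"fmt"
		"math/rand"
		"strings"
		"time"

		"libvirt.org/go/libvirt"
	)

	// waitForIP polls the private network's DHCP leases for the domain's MAC,
	// sleeping a jittered interval between attempts until a lease appears.
	func waitForIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			leases, err := network.GetDHCPLeases()
			if err == nil {
				for _, lease := range leases {
					if strings.EqualFold(lease.Mac, mac) && lease.IPaddr != "" {
						return lease.IPaddr, nil
					}
				}
			}
			// Jittered backoff; the real retry.go also grows the interval.
			time.Sleep(time.Duration(250+rand.Intn(500)) * time.Millisecond)
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}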
	I1108 09:33:10.120193   41318 addons.go:239] Setting addon default-storageclass=true in "calico-615410"
	I1108 09:33:10.120237   41318 host.go:66] Checking if "calico-615410" exists ...
	I1108 09:33:10.120262   41318 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:33:10.120279   41318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:33:10.123094   41318 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:33:10.123114   41318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:33:10.124795   41318 main.go:143] libmachine: domain calico-615410 has defined MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.125720   41318 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:e6:27", ip: ""} in network mk-calico-615410: {Iface:virbr5 ExpiryTime:2025-11-08 10:32:40 +0000 UTC Type:0 Mac:52:54:00:4a:e6:27 Iaid: IPaddr:192.168.83.75 Prefix:24 Hostname:calico-615410 Clientid:01:52:54:00:4a:e6:27}
	I1108 09:33:10.125758   41318 main.go:143] libmachine: domain calico-615410 has defined IP address 192.168.83.75 and MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.125968   41318 sshutil.go:53] new ssh client: &{IP:192.168.83.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/calico-615410/id_rsa Username:docker}
	I1108 09:33:10.126989   41318 main.go:143] libmachine: domain calico-615410 has defined MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.127518   41318 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:e6:27", ip: ""} in network mk-calico-615410: {Iface:virbr5 ExpiryTime:2025-11-08 10:32:40 +0000 UTC Type:0 Mac:52:54:00:4a:e6:27 Iaid: IPaddr:192.168.83.75 Prefix:24 Hostname:calico-615410 Clientid:01:52:54:00:4a:e6:27}
	I1108 09:33:10.127564   41318 main.go:143] libmachine: domain calico-615410 has defined IP address 192.168.83.75 and MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.127770   41318 sshutil.go:53] new ssh client: &{IP:192.168.83.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/calico-615410/id_rsa Username:docker}
	I1108 09:33:10.502775   41318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:33:10.517997   41318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:33:10.780316   41318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:33:10.817984   41318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:33:11.349685   41318 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1108 09:33:11.350904   41318 node_ready.go:35] waiting up to 15m0s for node "calico-615410" to be "Ready" ...
	I1108 09:33:11.895521   41318 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-615410" context rescaled to 1 replicas
	I1108 09:33:12.466687   41318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.686333868s)
	I1108 09:33:12.466764   41318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.648754599s)
	I1108 09:33:12.481691   41318 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:33:12.482674   41318 addons.go:515] duration metric: took 2.36783472s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:33:13.356683   41318 node_ready.go:57] node "calico-615410" has "Ready":"False" status (will retry)
	I1108 09:33:13.460250   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:33:13.460341   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:33:13.460362   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:13.497111   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:33:13.497138   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:33:13.768358   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:13.777618   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:33:13.777746   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:33:14.268357   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:14.275293   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:33:14.275510   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:33:14.769215   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:14.784533   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:33:14.784587   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:33:15.269306   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:15.274742   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 200:
	ok
	I1108 09:33:15.283805   41684 api_server.go:141] control plane version: v1.34.1
	I1108 09:33:15.283834   41684 api_server.go:131] duration metric: took 5.015627913s to wait for apiserver health ...
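
	The health wait above walks through the apiserver's startup states: 403 while anonymous access to /healthz is still forbidden, 500 while poststarthook/rbac/bootstrap-roles is failing, then 200 "ok". A standalone sketch of such a poll loop; the function name, the fixed 500ms interval, and the InsecureSkipVerify shortcut (this sketch carries no CA bundle) are assumptions, not minikube's implementation:

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz GETs /healthz until the apiserver answers 200 "ok",
	// printing the diagnostic body on any other status, as the log above does.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 4 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}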
	I1108 09:33:15.283845   41684 cni.go:84] Creating CNI manager for ""
	I1108 09:33:15.283853   41684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:33:15.288615   41684 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 09:33:15.289863   41684 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 09:33:15.306135   41684 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1108 09:33:15.337591   41684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:33:15.346076   41684 system_pods.go:59] 6 kube-system pods found
	I1108 09:33:15.346117   41684 system_pods.go:61] "coredns-66bc5c9577-bljvk" [ba662ec9-4f89-4b75-ad34-27e5fe5bba61] Running
	I1108 09:33:15.346132   41684 system_pods.go:61] "etcd-pause-022459" [6caec945-cd8e-4d36-9d98-e0346d82f48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:33:15.346141   41684 system_pods.go:61] "kube-apiserver-pause-022459" [952270ad-2035-4f93-b71e-8729e2ac93cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:33:15.346153   41684 system_pods.go:61] "kube-controller-manager-pause-022459" [7516e826-ffd2-41f4-8dec-13d28ac1fcf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:33:15.346162   41684 system_pods.go:61] "kube-proxy-jwkzf" [eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:33:15.346172   41684 system_pods.go:61] "kube-scheduler-pause-022459" [e6a79821-2954-4e41-9cb9-64d610f8cd24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:33:15.346181   41684 system_pods.go:74] duration metric: took 8.561982ms to wait for pod list to return data ...
	I1108 09:33:15.346191   41684 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:33:15.354130   41684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 09:33:15.354230   41684 node_conditions.go:123] node cpu capacity is 2
	I1108 09:33:15.354263   41684 node_conditions.go:105] duration metric: took 8.065549ms to run NodePressure ...
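
	node_conditions.go above reads the node's capacity (ephemeral storage 17734596Ki, 2 CPUs) as the input for its NodePressure verification. A client-go sketch that surfaces the same two quantities; the helper name is invented and minikube's actual check involves more than printing them:

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists nodes and reports the two capacities the log
	// mentions: ephemeral storage and CPU.
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}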
	I1108 09:33:15.354361   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:11.733961   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:11.735487   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:11.735533   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:11.736062   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:11.736102   42540 retry.go:31] will retry after 1.277034818s: waiting for domain to come up
	I1108 09:33:13.015163   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:13.015916   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:13.015937   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:13.016389   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:13.016424   42540 retry.go:31] will retry after 1.387705285s: waiting for domain to come up
	I1108 09:33:14.405813   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:14.406793   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:14.406814   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:14.407288   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:14.407326   42540 retry.go:31] will retry after 1.81408043s: waiting for domain to come up
	I1108 09:33:16.146681   41684 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1108 09:33:16.153742   41684 kubeadm.go:744] kubelet initialised
	I1108 09:33:16.153770   41684 kubeadm.go:745] duration metric: took 7.059023ms waiting for restarted kubelet to initialise ...
	I1108 09:33:16.153788   41684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:33:16.185984   41684 ops.go:34] apiserver oom_adj: -16
	I1108 09:33:16.186006   41684 kubeadm.go:602] duration metric: took 20.597208139s to restartPrimaryControlPlane
	I1108 09:33:16.186018   41684 kubeadm.go:403] duration metric: took 20.911122803s to StartCluster
	I1108 09:33:16.186038   41684 settings.go:142] acquiring lock: {Name:mk0d0617389eeb9d724259ab95a170c08eef0474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:16.186133   41684 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:33:16.187851   41684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:16.188169   41684 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:16.188441   41684 config.go:182] Loaded profile config "pause-022459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:16.188522   41684 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:33:16.189521   41684 out.go:179] * Verifying Kubernetes components...
	I1108 09:33:16.190054   41684 out.go:179] * Enabled addons: 
	W1108 09:33:15.361732   41318 node_ready.go:57] node "calico-615410" has "Ready":"False" status (will retry)
	W1108 09:33:17.364305   41318 node_ready.go:57] node "calico-615410" has "Ready":"False" status (will retry)
	I1108 09:33:18.358634   41318 node_ready.go:49] node "calico-615410" is "Ready"
	I1108 09:33:18.358679   41318 node_ready.go:38] duration metric: took 7.007720021s for node "calico-615410" to be "Ready" ...
	I1108 09:33:18.358698   41318 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:33:18.358819   41318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:18.415950   41318 api_server.go:72] duration metric: took 8.301323765s to wait for apiserver process to appear ...
	I1108 09:33:18.415983   41318 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:33:18.416015   41318 api_server.go:253] Checking apiserver healthz at https://192.168.83.75:8443/healthz ...
	I1108 09:33:18.427665   41318 api_server.go:279] https://192.168.83.75:8443/healthz returned 200:
	ok
	I1108 09:33:18.431912   41318 api_server.go:141] control plane version: v1.34.1
	I1108 09:33:18.431944   41318 api_server.go:131] duration metric: took 15.952314ms to wait for apiserver health ...
	I1108 09:33:18.431956   41318 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:33:18.438153   41318 system_pods.go:59] 9 kube-system pods found
	I1108 09:33:18.438194   41318 system_pods.go:61] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:18.438209   41318 system_pods.go:61] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:18.438221   41318 system_pods.go:61] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:18.438237   41318 system_pods.go:61] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:18.438251   41318 system_pods.go:61] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:18.438257   41318 system_pods.go:61] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:18.438263   41318 system_pods.go:61] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:18.438281   41318 system_pods.go:61] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:18.438301   41318 system_pods.go:61] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:18.438309   41318 system_pods.go:74] duration metric: took 6.346153ms to wait for pod list to return data ...
	I1108 09:33:18.438319   41318 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:33:18.446457   41318 default_sa.go:45] found service account: "default"
	I1108 09:33:18.446480   41318 default_sa.go:55] duration metric: took 8.149799ms for default service account to be created ...
	I1108 09:33:18.446490   41318 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:33:18.452690   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:18.452729   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:18.452741   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:18.452749   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:18.452754   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:18.452760   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:18.452767   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:18.452773   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:18.452783   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:18.452791   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:18.452826   41318 retry.go:31] will retry after 243.178969ms: missing components: kube-dns
	I1108 09:33:18.704420   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:18.704465   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:18.704480   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:18.704490   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:18.704522   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:18.704530   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:18.704537   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:18.704556   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:18.704561   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:18.704581   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:18.704606   41318 retry.go:31] will retry after 299.397948ms: missing components: kube-dns
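
	The retries above all fail for the same reason, "missing components: kube-dns": the CoreDNS pod stays Pending until Calico finishes initialising. A reduced client-go sketch of that wait loop; the label selector is the standard k8s-app=kube-dns one, while the function name and interval are assumptions:

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForKubeDNS retries until every kube-dns (CoreDNS) pod in kube-system
	// reports phase Running, as the system_pods.go loop above requires.
	func waitForKubeDNS(ctx context.Context, cs *kubernetes.Clientset, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns",
			})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						break
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("missing components: kube-dns")
	}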
	I1108 09:33:16.190703   41684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:33:16.191227   41684 addons.go:515] duration metric: took 2.730403ms for enable addons: enabled=[]
	I1108 09:33:16.517809   41684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:33:16.545401   41684 node_ready.go:35] waiting up to 6m0s for node "pause-022459" to be "Ready" ...
	I1108 09:33:16.550404   41684 node_ready.go:49] node "pause-022459" is "Ready"
	I1108 09:33:16.550438   41684 node_ready.go:38] duration metric: took 4.977123ms for node "pause-022459" to be "Ready" ...
	I1108 09:33:16.550453   41684 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:33:16.550528   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:16.581733   41684 api_server.go:72] duration metric: took 393.532938ms to wait for apiserver process to appear ...
	I1108 09:33:16.581762   41684 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:33:16.581778   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:16.589647   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 200:
	ok
	I1108 09:33:16.591292   41684 api_server.go:141] control plane version: v1.34.1
	I1108 09:33:16.591317   41684 api_server.go:131] duration metric: took 9.548627ms to wait for apiserver health ...
	I1108 09:33:16.591328   41684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:33:16.595836   41684 system_pods.go:59] 6 kube-system pods found
	I1108 09:33:16.595866   41684 system_pods.go:61] "coredns-66bc5c9577-bljvk" [ba662ec9-4f89-4b75-ad34-27e5fe5bba61] Running
	I1108 09:33:16.595877   41684 system_pods.go:61] "etcd-pause-022459" [6caec945-cd8e-4d36-9d98-e0346d82f48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:33:16.595886   41684 system_pods.go:61] "kube-apiserver-pause-022459" [952270ad-2035-4f93-b71e-8729e2ac93cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:33:16.595896   41684 system_pods.go:61] "kube-controller-manager-pause-022459" [7516e826-ffd2-41f4-8dec-13d28ac1fcf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:33:16.595903   41684 system_pods.go:61] "kube-proxy-jwkzf" [eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c] Running
	I1108 09:33:16.595912   41684 system_pods.go:61] "kube-scheduler-pause-022459" [e6a79821-2954-4e41-9cb9-64d610f8cd24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:33:16.595923   41684 system_pods.go:74] duration metric: took 4.587422ms to wait for pod list to return data ...
	I1108 09:33:16.595935   41684 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:33:16.600087   41684 default_sa.go:45] found service account: "default"
	I1108 09:33:16.600104   41684 default_sa.go:55] duration metric: took 4.164559ms for default service account to be created ...
	I1108 09:33:16.600112   41684 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:33:16.604053   41684 system_pods.go:86] 6 kube-system pods found
	I1108 09:33:16.604077   41684 system_pods.go:89] "coredns-66bc5c9577-bljvk" [ba662ec9-4f89-4b75-ad34-27e5fe5bba61] Running
	I1108 09:33:16.604088   41684 system_pods.go:89] "etcd-pause-022459" [6caec945-cd8e-4d36-9d98-e0346d82f48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:33:16.604098   41684 system_pods.go:89] "kube-apiserver-pause-022459" [952270ad-2035-4f93-b71e-8729e2ac93cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:33:16.604109   41684 system_pods.go:89] "kube-controller-manager-pause-022459" [7516e826-ffd2-41f4-8dec-13d28ac1fcf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:33:16.604118   41684 system_pods.go:89] "kube-proxy-jwkzf" [eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c] Running
	I1108 09:33:16.604126   41684 system_pods.go:89] "kube-scheduler-pause-022459" [e6a79821-2954-4e41-9cb9-64d610f8cd24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:33:16.604134   41684 system_pods.go:126] duration metric: took 4.017421ms to wait for k8s-apps to be running ...
	I1108 09:33:16.604146   41684 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:33:16.604197   41684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:33:16.627887   41684 system_svc.go:56] duration metric: took 23.730186ms WaitForService to wait for kubelet
	I1108 09:33:16.627921   41684 kubeadm.go:587] duration metric: took 439.721784ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:33:16.627942   41684 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:33:16.632609   41684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 09:33:16.632640   41684 node_conditions.go:123] node cpu capacity is 2
	I1108 09:33:16.632655   41684 node_conditions.go:105] duration metric: took 4.70671ms to run NodePressure ...
	I1108 09:33:16.632672   41684 start.go:242] waiting for startup goroutines ...
	I1108 09:33:16.632683   41684 start.go:247] waiting for cluster config update ...
	I1108 09:33:16.632707   41684 start.go:256] writing updated cluster config ...
	I1108 09:33:16.702104   41684 ssh_runner.go:195] Run: rm -f paused
	I1108 09:33:16.708550   41684 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:33:16.709327   41684 kapi.go:59] client config for pause-022459: &rest.Config{Host:"https://192.168.39.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
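
	The rest.Config dump above is almost entirely zero values; what the test client actually sets is the host plus the profile's client cert/key and the cluster CA. A client-go sketch that builds an equivalent clientset from those same paths; newPauseClient is an invented helper:

	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// newPauseClient reproduces the non-zero fields of the dumped rest.Config:
	// apiserver host, client certificate pair, and cluster CA.
	func newPauseClient() (*kubernetes.Clientset, error) {
		cfg := &rest.Config{
			Host: "https://192.168.39.96:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt",
			},
		}
		return kubernetes.NewForConfig(cfg)
	}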
	I1108 09:33:16.714352   41684 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bljvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:16.723675   41684 pod_ready.go:94] pod "coredns-66bc5c9577-bljvk" is "Ready"
	I1108 09:33:16.723709   41684 pod_ready.go:86] duration metric: took 9.335578ms for pod "coredns-66bc5c9577-bljvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:16.727453   41684 pod_ready.go:83] waiting for pod "etcd-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:33:18.735526   41684 pod_ready.go:104] pod "etcd-pause-022459" is not "Ready", error: <nil>
	W1108 09:33:20.735670   41684 pod_ready.go:104] pod "etcd-pause-022459" is not "Ready", error: <nil>
	I1108 09:33:16.223266   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:16.224015   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:16.224032   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:16.224523   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:16.224565   42540 retry.go:31] will retry after 2.770031139s: waiting for domain to come up
	I1108 09:33:18.995813   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:18.996562   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:18.996584   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:18.996965   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:18.996999   42540 retry.go:31] will retry after 2.632439756s: waiting for domain to come up
	I1108 09:33:19.009672   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:19.009710   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:19.009734   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:19.009751   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:19.009761   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:19.009773   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:19.009782   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:19.009791   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:19.009801   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:19.009811   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:19.009835   41318 retry.go:31] will retry after 365.252388ms: missing components: kube-dns
	I1108 09:33:19.381387   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:19.381427   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:19.381446   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:19.381456   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:19.381466   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:19.381473   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:19.381480   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:19.381486   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:19.381519   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:19.381530   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:19.381552   41318 retry.go:31] will retry after 505.776445ms: missing components: kube-dns
	I1108 09:33:19.891960   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:19.891999   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:19.892017   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:19.892027   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:19.892033   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:19.892045   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:19.892057   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:19.892068   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:19.892073   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:19.892080   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:19.892103   41318 retry.go:31] will retry after 628.071399ms: missing components: kube-dns
	I1108 09:33:20.526230   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:20.526274   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:20.526286   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:20.526296   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:20.526307   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:20.526314   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:20.526319   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:20.526328   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:20.526333   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:20.526342   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:20.526360   41318 retry.go:31] will retry after 904.625149ms: missing components: kube-dns
	I1108 09:33:21.437696   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:21.437744   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:21.437757   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:21.437767   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:21.437777   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:21.437785   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:21.437790   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:21.437796   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:21.437806   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:21.437811   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:21.437827   41318 retry.go:31] will retry after 792.380889ms: missing components: kube-dns
	I1108 09:33:22.237481   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:22.237537   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:22.237553   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:22.237563   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:22.237572   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:22.237580   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:22.237585   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:22.237594   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:22.237602   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:22.237607   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:22.237625   41318 retry.go:31] will retry after 1.218879985s: missing components: kube-dns
	I1108 09:33:23.466488   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:23.466538   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:23.466549   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:23.466555   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:23.466560   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:23.466564   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:23.466567   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:23.466571   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:23.466575   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:23.466578   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:23.466592   41318 retry.go:31] will retry after 1.329928251s: missing components: kube-dns
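Each "9 kube-system pods found" block above is one iteration of a poll that lists kube-system pods and retries until a kube-dns pod (CoreDNS, labeled k8s-app=kube-dns per the label list later in this log) is up. A rough equivalent with client-go, assuming an already-built clientset:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // missingKubeDNS reports whether no Running pod labeled k8s-app=kube-dns
    // exists yet -- the condition driving "will retry after ...: missing
    // components: kube-dns" above.
    func missingKubeDNS(ctx context.Context, c kubernetes.Interface) (bool, error) {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
            LabelSelector: "k8s-app=kube-dns",
        })
        if err != nil {
            return true, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }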
	I1108 09:33:22.239587   41684 pod_ready.go:94] pod "etcd-pause-022459" is "Ready"
	I1108 09:33:22.239613   41684 pod_ready.go:86] duration metric: took 5.512129431s for pod "etcd-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:22.242115   41684 pod_ready.go:83] waiting for pod "kube-apiserver-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:33:24.252833   41684 pod_ready.go:104] pod "kube-apiserver-pause-022459" is not "Ready", error: <nil>
	I1108 09:33:21.630805   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:21.631633   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:21.631657   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:21.632151   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:21.632195   42540 retry.go:31] will retry after 2.789068555s: waiting for domain to come up
	I1108 09:33:24.422649   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.423348   42540 main.go:143] libmachine: domain custom-flannel-615410 has current primary IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.423364   42540 main.go:143] libmachine: found domain IP: 192.168.72.152
	I1108 09:33:24.423372   42540 main.go:143] libmachine: reserving static IP address...
	I1108 09:33:24.423883   42540 main.go:143] libmachine: unable to find host DHCP lease matching {name: "custom-flannel-615410", mac: "52:54:00:a1:36:61", ip: "192.168.72.152"} in network mk-custom-flannel-615410
	I1108 09:33:24.679546   42540 main.go:143] libmachine: reserved static IP address 192.168.72.152 for domain custom-flannel-615410
	I1108 09:33:24.679576   42540 main.go:143] libmachine: waiting for SSH...
	I1108 09:33:24.679609   42540 main.go:143] libmachine: Getting to WaitForSSH function...
	I1108 09:33:24.683049   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.683622   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:24.683663   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.683942   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:24.684258   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:24.684276   42540 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1108 09:33:24.793656   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
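"waiting for SSH" above means dialing port 22 and running `exit 0` until it succeeds. A minimal sketch of one such probe with golang.org/x/crypto/ssh (a single attempt, no retry loop), reusing the key path, user, and address from this log:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile(
            "/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "192.168.72.152:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no host-key pinning
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        // The reachability probe is literally "exit 0": success means sshd is up.
        if err := session.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH ready")
    }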
	I1108 09:33:24.794169   42540 main.go:143] libmachine: domain creation complete
	I1108 09:33:24.796091   42540 machine.go:94] provisionDockerMachine start ...
	I1108 09:33:24.799090   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.799644   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:24.799673   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.799865   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:24.800139   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:24.800153   42540 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:33:24.915766   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1108 09:33:24.915794   42540 buildroot.go:166] provisioning hostname "custom-flannel-615410"
	I1108 09:33:24.918934   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.919428   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:24.919457   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.919689   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:24.919960   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:24.919982   42540 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-615410 && echo "custom-flannel-615410" | sudo tee /etc/hostname
	I1108 09:33:25.050485   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-615410
	
	I1108 09:33:25.054118   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.054671   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.054715   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.054903   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:25.055166   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:25.055183   42540 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-615410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-615410/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-615410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:33:25.184716   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
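The script above makes the guest resolve its own hostname locally: if no /etc/hosts entry already ends in the new name, it either rewrites the existing 127.0.1.1 line or appends one, so afterwards the file should contain a line like:

    127.0.1.1 custom-flannel-615410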
	I1108 09:33:25.184754   42540 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5845/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5845/.minikube}
	I1108 09:33:25.184797   42540 buildroot.go:174] setting up certificates
	I1108 09:33:25.184811   42540 provision.go:84] configureAuth start
	I1108 09:33:25.188581   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.189160   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.189202   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.192448   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.192975   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.193020   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.193186   42540 provision.go:143] copyHostCerts
	I1108 09:33:25.193247   42540 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem, removing ...
	I1108 09:33:25.193268   42540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem
	I1108 09:33:25.193354   42540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem (1082 bytes)
	I1108 09:33:25.193479   42540 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem, removing ...
	I1108 09:33:25.193514   42540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem
	I1108 09:33:25.193571   42540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem (1123 bytes)
	I1108 09:33:25.193677   42540 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem, removing ...
	I1108 09:33:25.193690   42540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem
	I1108 09:33:25.193731   42540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem (1675 bytes)
	I1108 09:33:25.193809   42540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-615410 san=[127.0.0.1 192.168.72.152 custom-flannel-615410 localhost minikube]
	I1108 09:33:25.294409   42540 provision.go:177] copyRemoteCerts
	I1108 09:33:25.294464   42540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:33:25.297196   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.297608   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.297636   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.297760   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:25.386113   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:33:25.422609   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 09:33:25.453103   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:33:25.482876   42540 provision.go:87] duration metric: took 298.051404ms to configureAuth
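configureAuth above generates a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, 192.168.72.152, custom-flannel-615410, localhost, minikube). A compressed sketch with crypto/x509 -- self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair from the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-615410"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.152")},
            DNSNames:    []string{"custom-flannel-615410", "localhost", "minikube"},
        }
        // Self-signed (template == parent); minikube uses its CA as parent instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }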
	I1108 09:33:25.482907   42540 buildroot.go:189] setting minikube options for container-runtime
	I1108 09:33:25.483096   42540 config.go:182] Loaded profile config "custom-flannel-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:25.486073   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.486581   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.486612   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.486800   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:25.487042   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:25.487058   42540 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:33:25.734536   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:33:25.734579   42540 machine.go:97] duration metric: took 938.467968ms to provisionDockerMachine
	I1108 09:33:25.734590   42540 client.go:176] duration metric: took 19.609653202s to LocalClient.Create
	I1108 09:33:25.734609   42540 start.go:167] duration metric: took 19.609719823s to libmachine.API.Create "custom-flannel-615410"
	I1108 09:33:25.734617   42540 start.go:293] postStartSetup for "custom-flannel-615410" (driver="kvm2")
	I1108 09:33:25.734629   42540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:33:25.734692   42540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:33:25.738027   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.738459   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.738483   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.738681   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:25.834810   42540 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:33:25.841430   42540 info.go:137] Remote host: Buildroot 2025.02
	I1108 09:33:25.841467   42540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/addons for local assets ...
	I1108 09:33:25.841555   42540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/files for local assets ...
	I1108 09:33:25.841665   42540 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem -> 97452.pem in /etc/ssl/certs
	I1108 09:33:25.841806   42540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:33:25.857215   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem --> /etc/ssl/certs/97452.pem (1708 bytes)
	I1108 09:33:25.891370   42540 start.go:296] duration metric: took 156.736127ms for postStartSetup
	I1108 09:33:25.895020   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.895571   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.895604   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.895902   42540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/config.json ...
	I1108 09:33:25.896131   42540 start.go:128] duration metric: took 19.773030774s to createHost
	I1108 09:33:25.898688   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.899066   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.899093   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.899282   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:25.899591   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:25.899605   42540 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1108 09:33:26.014068   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762594405.959814915
	
	I1108 09:33:26.014095   42540 fix.go:216] guest clock: 1762594405.959814915
	I1108 09:33:26.014104   42540 fix.go:229] Guest: 2025-11-08 09:33:25.959814915 +0000 UTC Remote: 2025-11-08 09:33:25.896144199 +0000 UTC m=+19.898782632 (delta=63.670716ms)
	I1108 09:33:26.014119   42540 fix.go:200] guest clock delta is within tolerance: 63.670716ms
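The skew check above runs `date +%s.%N` on the guest and compares the result with the host's timestamp for the same moment. A sketch of the comparison using the values from this log (only the delta arithmetic is grounded; the 1s tolerance bound is our assumption):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
        secs, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if frac != "" {
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(secs, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1762594405.959814915") // guest clock from the log
        if err != nil {
            panic(err)
        }
        remote := time.Date(2025, time.November, 8, 9, 33, 25, 896144199, time.UTC)
        delta := guest.Sub(remote)
        // Prints delta=63.670716ms, matching the log line above.
        fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 1)
    }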
	I1108 09:33:26.014123   42540 start.go:83] releasing machines lock for "custom-flannel-615410", held for 19.891121217s
	I1108 09:33:26.017247   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.017705   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:26.017729   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.018351   42540 ssh_runner.go:195] Run: cat /version.json
	I1108 09:33:26.018371   42540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:33:26.021715   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.021719   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.022304   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:26.022324   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:26.022343   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.022354   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.022532   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:26.022708   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:26.134644   42540 ssh_runner.go:195] Run: systemctl --version
	I1108 09:33:26.141626   42540 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:33:26.317052   42540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:33:26.325621   42540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:33:26.325680   42540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:33:26.350578   42540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:33:26.350606   42540 start.go:496] detecting cgroup driver to use...
	I1108 09:33:26.350680   42540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:33:26.374561   42540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:33:26.395607   42540 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:33:26.395693   42540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:33:26.416857   42540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:33:26.437078   42540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:33:26.613155   42540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:33:26.849240   42540 docker.go:234] disabling docker service ...
	I1108 09:33:26.849323   42540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:33:26.866404   42540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:33:26.882838   42540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:33:27.053946   42540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:33:27.241767   42540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:33:27.262643   42540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:33:27.296425   42540 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:33:27.296515   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.314293   42540 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:33:27.314372   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.328455   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.342952   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.356376   42540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:33:27.371255   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.385620   42540 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.411492   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
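Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with entries along these lines (reassembled from the commands, not captured from the guest):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]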
	I1108 09:33:27.427044   42540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:33:27.439724   42540 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 09:33:27.439793   42540 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 09:33:27.464863   42540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
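The failed sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, which is why the very next step is `modprobe br_netfilter` before enabling IPv4 forwarding.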
	I1108 09:33:27.479288   42540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:33:27.645128   42540 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:33:28.090350   42540 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:33:28.090414   42540 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:33:28.098897   42540 start.go:564] Will wait 60s for crictl version
	I1108 09:33:28.098953   42540 ssh_runner.go:195] Run: which crictl
	I1108 09:33:28.103522   42540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 09:33:28.156915   42540 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1108 09:33:28.156998   42540 ssh_runner.go:195] Run: crio --version
	I1108 09:33:28.202920   42540 ssh_runner.go:195] Run: crio --version
	I1108 09:33:28.239674   42540 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	W1108 09:33:26.253103   41684 pod_ready.go:104] pod "kube-apiserver-pause-022459" is not "Ready", error: <nil>
	I1108 09:33:28.253121   41684 pod_ready.go:94] pod "kube-apiserver-pause-022459" is "Ready"
	I1108 09:33:28.253147   41684 pod_ready.go:86] duration metric: took 6.011010476s for pod "kube-apiserver-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.255688   41684 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.269480   41684 pod_ready.go:94] pod "kube-controller-manager-pause-022459" is "Ready"
	I1108 09:33:28.269528   41684 pod_ready.go:86] duration metric: took 13.811809ms for pod "kube-controller-manager-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.273068   41684 pod_ready.go:83] waiting for pod "kube-proxy-jwkzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.280191   41684 pod_ready.go:94] pod "kube-proxy-jwkzf" is "Ready"
	I1108 09:33:28.280214   41684 pod_ready.go:86] duration metric: took 7.122426ms for pod "kube-proxy-jwkzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.283256   41684 pod_ready.go:83] waiting for pod "kube-scheduler-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.446415   41684 pod_ready.go:94] pod "kube-scheduler-pause-022459" is "Ready"
	I1108 09:33:28.446452   41684 pod_ready.go:86] duration metric: took 163.170946ms for pod "kube-scheduler-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.446468   41684 pod_ready.go:40] duration metric: took 11.737884625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:33:28.497095   41684 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:33:28.498640   41684 out.go:179] * Done! kubectl is now configured to use "pause-022459" cluster and "default" namespace by default
	I1108 09:33:24.809221   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:24.809250   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:24.809261   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:24.809270   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:24.809276   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:24.809282   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:24.809287   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:24.809292   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:24.809297   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:24.809308   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:24.809325   41318 retry.go:31] will retry after 1.864369s: missing components: kube-dns
	I1108 09:33:26.680224   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:26.680264   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:26.680276   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:26.680286   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:26.680298   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:26.680305   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:26.680313   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:26.680323   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:26.680333   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:26.680337   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:26.680352   41318 retry.go:31] will retry after 2.869651595s: missing components: kube-dns
	
	
	==> CRI-O <==
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.434866431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd02f777-151f-44fe-9a82-b084c22453d9 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.440026588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b391f2b5-5fd0-41e5-8e74-cbfaf642449b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.440849741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762594409440808616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b391f2b5-5fd0-41e5-8e74-cbfaf642449b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.442021592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee1a6e3c-07b3-45cd-9cdf-61dc6a8f917f name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.442141711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee1a6e3c-07b3-45cd-9cdf-61dc6a8f917f name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.442817630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762594394407919050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762594389860952434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762594389837601657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762594389859885151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762594389792872742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe,PodSandboxId:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762594375813865445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762594374742763358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762594374841363413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1762594374728812258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762594374670567363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762594374628528257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba,PodSandboxId:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762594320616875377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee1a6e3c-07b3-45cd-9cdf-61dc6a8f917f name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.524668151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0bed146-e511-44e8-b8db-b41399976e6c name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.524976402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0bed146-e511-44e8-b8db-b41399976e6c name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.527653311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31518e16-6d5f-4b05-a76d-f1efdc33aeaa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.528759303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762594409528645239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31518e16-6d5f-4b05-a76d-f1efdc33aeaa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.530023061Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4288205b-2cd6-4bde-9c4d-98a1eabda594 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.530225486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4288205b-2cd6-4bde-9c4d-98a1eabda594 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.530653870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762594394407919050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762594389860952434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762594389837601657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762594389859885151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d3
6362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762594389792872742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe,PodSandboxId:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17625
94375813865445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108
d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762594374742763358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&Conta
inerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762594374841363413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885714e5108a3c2966b2834c95a
a802e596aae683075e63e46febc1c5314fd70,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1762594374728812258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762594374670567363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762594374628528257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba,PodSandboxId:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762594320616875377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4288205b-2cd6-4bde-9c4d-98a1eabda594 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.552546519Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a88ac736-2212-4400-9d1f-6b3a41e4bfcc name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.552967253Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-bljvk,Uid:ba662ec9-4f89-4b75-ad34-27e5fe5bba61,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1762594374221929304,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-08T09:31:59.894519979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-022459,Uid:8aac9aa9657cbe0ee0c163fb07b3bfb9,Namespace:kub
e-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1762594374154546214,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8aac9aa9657cbe0ee0c163fb07b3bfb9,kubernetes.io/config.seen: 2025-11-08T09:31:54.477777540Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-022459,Uid:0e6f0907d7e974011d92c91aa0853cd5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1762594374136634308,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e97
4011d92c91aa0853cd5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.96:8443,kubernetes.io/config.hash: 0e6f0907d7e974011d92c91aa0853cd5,kubernetes.io/config.seen: 2025-11-08T09:31:54.477776332Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&PodSandboxMetadata{Name:etcd-pause-022459,Uid:8ddaec152f6ad705ccc80ffd0d36362e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1762594374085564855,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.96:2379,kubernetes.io/config.hash: 8ddaec152f6ad705ccc80ffd0d36362e,kubernetes.io/config.seen: 2025-11-08T09
:31:54.477772028Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-022459,Uid:6def352d64f30b28eafe8d23008c1c9f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1762594374062179584,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6def352d64f30b28eafe8d23008c1c9f,kubernetes.io/config.seen: 2025-11-08T09:31:54.477778604Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&PodSandboxMetadata{Name:kube-proxy-jwkzf,Uid:eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Creat
edAt:1762594374043210796,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-08T09:31:59.596028639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-bljvk,Uid:ba662ec9-4f89-4b75-ad34-27e5fe5bba61,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1762594320256032370,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2025-11-08T09:31:59.894519979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a88ac736-2212-4400-9d1f-6b3a41e4bfcc name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.555907487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01ad64e3-5772-4225-bd2b-f359de575a8e name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.555996989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01ad64e3-5772-4225-bd2b-f359de575a8e name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.556499604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762594394407919050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762594389860952434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762594389837601657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762594389859885151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d3
6362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762594389792872742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe,PodSandboxId:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17625
94375813865445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108
d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762594374742763358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&Conta
inerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762594374841363413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885714e5108a3c2966b2834c95a
a802e596aae683075e63e46febc1c5314fd70,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1762594374728812258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762594374670567363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762594374628528257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba,PodSandboxId:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762594320616875377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01ad64e3-5772-4225-bd2b-f359de575a8e name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.610703849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd9632a2-6900-4688-b5b9-3b9d70981706 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.610922006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd9632a2-6900-4688-b5b9-3b9d70981706 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.614138383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97b3c6bb-61e2-4d04-a77d-9411d838a66c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.615106991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762594409615067373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97b3c6bb-61e2-4d04-a77d-9411d838a66c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.616228163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f475926e-47d2-4359-8d12-6f6f9d50c2d3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.616314567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f475926e-47d2-4359-8d12-6f6f9d50c2d3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:29 pause-022459 crio[2798]: time="2025-11-08 09:33:29.616806289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762594394407919050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762594389860952434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762594389837601657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc6
3c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762594389859885151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d3
6362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762594389792872742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe,PodSandboxId:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17625
94375813865445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108
d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762594374742763358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&Conta
inerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762594374841363413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885714e5108a3c2966b2834c95a
a802e596aae683075e63e46febc1c5314fd70,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1762594374728812258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762594374670567363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762594374628528257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba,PodSandboxId:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762594320616875377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f475926e-47d2-4359-8d12-6f6f9d50c2d3 name=/runtime.v1.RuntimeService/ListContainers
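
	The Version, ImageFsInfo, ListContainers, and ListPodSandbox requests in the log above are the standard CRI RuntimeService/ImageService RPCs that the kubelet polls against cri-o. As a rough sketch (assuming crictl is available inside the node, e.g. after `minikube ssh -p pause-022459`), the same views can be queried directly over the CRI socket:
	
	  crictl version        # RuntimeService/Version
	  crictl imagefsinfo    # ImageService/ImageFsInfo
	  crictl ps -a          # RuntimeService/ListContainers, all states
	  crictl pods           # RuntimeService/ListPodSandbox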
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	176ce5f957e27       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   15 seconds ago       Running             kube-proxy                2                   e9c1d81364077       kube-proxy-jwkzf
	7fc2093d83a09       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   19 seconds ago       Running             kube-scheduler            2                   bd430f93a4202       kube-scheduler-pause-022459
	835741b73bd96       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   19 seconds ago       Running             etcd                      2                   9177ade588811       etcd-pause-022459
	ea2d4c0331e40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   19 seconds ago       Running             kube-apiserver            2                   1ca6ab3b485aa       kube-apiserver-pause-022459
	a72f15a407001       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   19 seconds ago       Running             kube-controller-manager   2                   cd87e1c46f3be       kube-controller-manager-pause-022459
	031e9527c6baf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   33 seconds ago       Running             coredns                   1                   67a5c2d4abb3f       coredns-66bc5c9577-bljvk
	4fcc16ab58f2c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago       Exited              kube-controller-manager   1                   cd87e1c46f3be       kube-controller-manager-pause-022459
	356af4bb0b055       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   34 seconds ago       Exited              kube-proxy                1                   e9c1d81364077       kube-proxy-jwkzf
	885714e5108a3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago       Exited              kube-apiserver            1                   1ca6ab3b485aa       kube-apiserver-pause-022459
	4e3c3d77a2027       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago       Exited              etcd                      1                   9177ade588811       etcd-pause-022459
	b7d2ad19411e4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago       Exited              kube-scheduler            1                   bd430f93a4202       kube-scheduler-pause-022459
	e6f2b9508a47c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   22386cb0c7c04       coredns-66bc5c9577-bljvk
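
Editor's note: the table above is minikube's rendering of the same CRI ListContainers call recorded in the interceptor log that precedes it. For reference, a minimal Go sketch that issues that RPC directly against cri-o; the socket path and the truncated-ID formatting are assumptions of this sketch, not part of the test harness.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the cri-o runtime socket; the path is an assumption of this sketch.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC the interceptor log records: /runtime.v1.RuntimeService/ListContainers.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex characters, so a 13-char prefix is safe;
		// this mirrors the truncated IDs in the table above.
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}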
	
	
	==> coredns [031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33398 - 44078 "HINFO IN 8167381204853300725.4431295867302043852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027160886s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36395 - 45747 "HINFO IN 6104717662577509140.8260565522814595785. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026327627s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
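
Editor's note: both CoreDNS instances gate startup on the kubernetes plugin. The ready plugin answers 503 on :8181 until the API sync completes, and the exited instance never gets there because its list calls against 10.96.0.1:443 time out before the SIGTERM arrives. A minimal sketch of probing the health (:8080) and readiness (:8181) endpoints seen in the container port list; the pod IP is a placeholder.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a GET and prints the status. CoreDNS returns 200 on /ready
// only once the plugins tracked by the ready plugin (here: kubernetes)
// have synced, and 503 while it is still waiting.
func probe(url string) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: %s\n", url, resp.Status)
}

func main() {
	podIP := "10.244.0.2" // placeholder; substitute the actual coredns pod IP
	probe(fmt.Sprintf("http://%s:8080/health", podIP))
	probe(fmt.Sprintf("http://%s:8181/ready", podIP))
}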
	
	
	==> describe nodes <==
	Name:               pause-022459
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-022459
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=pause-022459
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_31_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:31:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-022459
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:33:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    pause-022459
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 25392fadbadd4ddd9e9cb8e77016aa89
	  System UUID:                25392fad-badd-4ddd-9e9c-b8e77016aa89
	  Boot ID:                    f0eb18d2-bb38-44f5-8e6f-f7348eb6731d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bljvk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     91s
	  kube-system                 etcd-pause-022459                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         97s
	  kube-system                 kube-apiserver-pause-022459             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-022459    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-jwkzf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-022459             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  102s (x8 over 103s)  kubelet          Node pause-022459 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 103s)  kubelet          Node pause-022459 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 103s)  kubelet          Node pause-022459 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node pause-022459 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node pause-022459 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node pause-022459 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeReady                95s                  kubelet          Node pause-022459 status is now: NodeReady
	  Normal  RegisteredNode           92s                  node-controller  Node pause-022459 event: Registered Node pause-022459 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node pause-022459 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node pause-022459 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node pause-022459 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                  node-controller  Node pause-022459 event: Registered Node pause-022459 in Controller
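
Editor's note: the three batches of NodeHasSufficient*/NodeAllocatableEnforced events correspond to the three kubelet starts (initial boot, the restart around 96s, the restart around 21s), and the node returns to Ready with fresh heartbeats each time. A minimal client-go sketch that reads the same condition table programmatically; the default kubeconfig location is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default kubeconfig path (~/.kube/config); an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-022459", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The same rows as the Conditions table above: MemoryPressure,
	// DiskPressure, PIDPressure, Ready, each with heartbeat timestamps.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.LastHeartbeatTime)
	}
}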
	
	
	==> dmesg <==
	[Nov 8 09:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001483] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007486] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.173693] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.111838] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.091305] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.186817] kauditd_printk_skb: 171 callbacks suppressed
	[Nov 8 09:32] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.598394] kauditd_printk_skb: 219 callbacks suppressed
	[ +22.229738] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 8 09:33] kauditd_printk_skb: 321 callbacks suppressed
	[  +5.593301] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.549085] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6] <==
	{"level":"warn","ts":"2025-11-08T09:32:57.077289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.091990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.108561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.128754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.144319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.161249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.269543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:33:06.233996Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T09:33:06.234072Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-022459","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	{"level":"error","ts":"2025-11-08T09:33:06.234151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:33:06.238824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:33:06.238929Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:33:06.238959Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4b4d4eeb3ae7df8","current-leader-member-id":"d4b4d4eeb3ae7df8"}
	{"level":"info","ts":"2025-11-08T09:33:06.239057Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-08T09:33:06.239071Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239717Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239754Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:33:06.239761Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.96:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239840Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239852Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:33:06.239857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:33:06.243381Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"error","ts":"2025-11-08T09:33:06.243498Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.96:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:33:06.243532Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-11-08T09:33:06.243542Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-022459","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	
	
	==> etcd [835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3] <==
	{"level":"warn","ts":"2025-11-08T09:33:12.274389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.292291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.304644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.317831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.332603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.341719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.356698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.368571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.377907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.387099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.400151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.412233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.433255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.448190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.462323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.477336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.488506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.500530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.609517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33512","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:33:16.035301Z","caller":"traceutil/trace.go:172","msg":"trace[404446499] linearizableReadLoop","detail":"{readStateIndex:557; appliedIndex:557; }","duration":"315.634815ms","start":"2025-11-08T09:33:15.719641Z","end":"2025-11-08T09:33:16.035276Z","steps":["trace[404446499] 'read index received'  (duration: 315.628328ms)","trace[404446499] 'applied index is now lower than readState.Index'  (duration: 5.589µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:33:16.126737Z","caller":"traceutil/trace.go:172","msg":"trace[1372632036] transaction","detail":"{read_only:false; number_of_response:0; response_revision:510; }","duration":"409.925155ms","start":"2025-11-08T09:33:15.716796Z","end":"2025-11-08T09:33:16.126721Z","steps":["trace[1372632036] 'process raft request'  (duration: 318.50919ms)","trace[1372632036] 'compare'  (duration: 91.337976ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:33:16.126843Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"407.126573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-08T09:33:16.126931Z","caller":"traceutil/trace.go:172","msg":"trace[1231036259] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:510; }","duration":"407.281777ms","start":"2025-11-08T09:33:15.719638Z","end":"2025-11-08T09:33:16.126919Z","steps":["trace[1231036259] 'agreement among raft nodes before linearized reading'  (duration: 315.751717ms)","trace[1231036259] 'range keys from in-memory index tree'  (duration: 91.296726ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:33:16.126964Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:33:15.719625Z","time spent":"407.326145ms","remote":"127.0.0.1:60978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-08T09:33:16.127204Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:33:15.716775Z","time spent":"410.006099ms","remote":"127.0.0.1:33008","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":29,"request content":"compare:<target:MOD key:\"/registry/rolebindings/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/rolebindings/kube-system/kube-proxy\" value_size:382 >> failure:<>"}
	
	
	==> kernel <==
	 09:33:30 up 2 min,  0 users,  load average: 1.04, 0.62, 0.25
	Linux pause-022459 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70] <==
	{"level":"warn","ts":"2025-11-08T09:33:00.515191Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.541713Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.565112Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.590104Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.617512Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.644570Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.671158Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.696836Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.720185Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.744665Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.768388Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.791980Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E1108 09:33:00.792100       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	E1108 09:33:00.937199       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:00.939044       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:01.936943       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:01.938758       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:02.937281       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:02.939163       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:03.937161       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:03.938723       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:04.936918       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:04.939808       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:05.936663       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:05.939409       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
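
Editor's note: in the exited instance's log, the RBAC and priority-class bootstrap loops keep dialing localhost:8443 and getting connection refused until a serving apiserver is back. The standard way to detect that moment is to poll /readyz on the secure port; a minimal sketch follows. Skipping certificate verification is a brevity assumption, a real check would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity: skip cert verification instead of
			// loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://127.0.0.1:8443/readyz")
		if err != nil {
			// "connection refused" here matches the retries in the log above.
			fmt.Println("not ready:", err)
			time.Sleep(time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("readyz: %s %s\n", resp.Status, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(time.Second)
	}
}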
	
	
	==> kube-apiserver [ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a] <==
	I1108 09:33:13.538066       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:33:13.538074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:33:13.582004       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:33:13.602938       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:33:13.603046       1 policy_source.go:240] refreshing policies
	I1108 09:33:13.608161       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:33:13.613186       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:33:13.615083       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:33:13.616725       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:33:13.619659       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:33:13.619739       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:33:13.621842       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:33:13.623734       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:33:13.629842       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:33:13.638141       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:33:14.210900       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:33:14.436071       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1108 09:33:15.159862       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.96]
	I1108 09:33:15.163886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:33:15.174728       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:33:15.552559       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:33:15.637753       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:33:15.701238       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:33:15.715302       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:33:21.862774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c] <==
	
	
	==> kube-controller-manager [a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c] <==
	I1108 09:33:16.949250       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:33:16.953505       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:33:16.956208       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:33:16.958748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:33:16.961217       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:33:16.964005       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:33:16.964077       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:33:16.964152       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:33:16.964187       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:33:16.964196       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:33:16.965555       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:33:16.965926       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:33:16.966051       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:33:16.968915       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:33:16.969064       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:33:16.970462       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:33:16.972873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:33:16.972910       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:33:16.972921       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:33:16.978885       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:33:16.979570       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:33:16.982824       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:33:16.988527       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:33:17.000001       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:33:17.000202       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
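
Editor's note: each "Caches are synced" line marks a shared informer completing its initial list/watch before the corresponding controller starts reconciling. A minimal client-go sketch of the same gate using a shared informer factory; the default kubeconfig location is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	factory.Start(ctx.Done())

	// The controller-manager's "Caches are synced" lines are this gate:
	// block until the informer's first list/watch has filled the local cache.
	if !cache.WaitForCacheSync(ctx.Done(), pods.HasSynced) {
		panic("caches did not sync before timeout")
	}
	fmt.Println("caches are synced")
}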
	
	
	==> kube-proxy [176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced] <==
	I1108 09:33:14.788348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:33:14.889641       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:33:14.889700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.96"]
	E1108 09:33:14.889819       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:33:14.991186       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1108 09:33:14.991276       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 09:33:14.991308       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:33:15.016946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:33:15.017686       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:33:15.017759       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:33:15.030512       1 config.go:200] "Starting service config controller"
	I1108 09:33:15.030990       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:33:15.031267       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:33:15.031303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:33:15.031322       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:33:15.031328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:33:15.031859       1 config.go:309] "Starting node config controller"
	I1108 09:33:15.031950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:33:15.131848       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:33:15.131885       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:33:15.131923       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:33:15.132197       1 shared_informer.go:356] "Caches are synced" controller="node config"
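
Editor's note: kube-proxy decides which IP families it can serve by listing the nat POSTROUTING chain for each; on this guest the ip6tables probe fails with exit status 3 (no IPv6 nat table in the kernel), so it proceeds single-stack IPv4 with the iptables proxier. A minimal sketch reproducing that probe; it needs root, and the binary names are assumptions.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors kube-proxy's per-family support probe: list the POSTROUTING
	// chain of the nat table for each iptables family.
	for _, bin := range []string{"iptables", "ip6tables"} {
		out, err := exec.Command(bin, "-t", "nat", "-L", "POSTROUTING", "-n").CombinedOutput()
		if err != nil {
			// On this guest, ip6tables exits with status 3: no IPv6 nat table.
			fmt.Printf("%s: no nat support (%v): %s", bin, err, out)
			continue
		}
		fmt.Printf("%s: nat table present\n", bin)
	}
}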
	
	
	==> kube-proxy [356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987] <==
	
	
	==> kube-scheduler [7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059] <==
	I1108 09:33:12.169967       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:33:13.544662       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:33:13.544753       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:33:13.544784       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:33:13.546493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:33:13.581871       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:33:13.582180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:33:13.585380       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:13.585486       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:13.587680       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:33:13.587771       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:33:13.686566       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d] <==
	E1108 09:33:02.229247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:33:02.329214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.96:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:33:02.569628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.96:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:33:02.638350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:33:02.736008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.96:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:33:02.877747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.96:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:33:03.103172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.96:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:33:03.111981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.96:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:33:03.335777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.96:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:33:03.371770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:33:03.440097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:33:03.520284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.96:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:33:03.645040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.96:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:33:03.757073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:33:05.467102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.96:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:33:05.675655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.96:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:33:06.002230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.96:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:33:06.386322       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1108 09:33:06.386713       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1108 09:33:06.387203       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:06.387253       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:06.387540       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 09:33:06.387988       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 09:33:06.388523       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 09:33:06.388850       1 run.go:72] "command failed" err="finished without leader elect"
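
Editor's note: the exited scheduler's reflectors fail against 192.168.39.96:8443 until the process gives up with "finished without leader elect"; the replacement simply comes up once the apiserver answers again. A minimal sketch of that wait-for-apiserver pattern using client-go's poll helper; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll once per second for up to two minutes until a trivial request
	// succeeds, i.e. the apiserver is answering again after the restart.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if _, err := cs.Discovery().ServerVersion(); err != nil {
				fmt.Println("still down:", err)
				return false, nil // transient failure: keep polling
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver is serving again")
}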
	
	
	==> kubelet <==
	Nov 08 09:33:12 pause-022459 kubelet[3818]: E1108 09:33:12.443903    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:12 pause-022459 kubelet[3818]: E1108 09:33:12.444155    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.448871    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.449315    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.498108    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.648575    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-022459\" already exists" pod="kube-system/kube-controller-manager-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.648633    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.669597    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-022459\" already exists" pod="kube-system/kube-scheduler-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.669810    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.671096    3818 kubelet_node_status.go:124] "Node was previously registered" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.671206    3818 kubelet_node_status.go:78] "Successfully registered node" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.671239    3818 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.673032    3818 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.697921    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-022459\" already exists" pod="kube-system/etcd-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.697952    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.716023    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022459\" already exists" pod="kube-system/kube-apiserver-pause-022459"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.073749    3818 apiserver.go:52] "Watching apiserver"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.117924    3818 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.202968    3818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c-lib-modules\") pod \"kube-proxy-jwkzf\" (UID: \"eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c\") " pod="kube-system/kube-proxy-jwkzf"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.203056    3818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c-xtables-lock\") pod \"kube-proxy-jwkzf\" (UID: \"eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c\") " pod="kube-system/kube-proxy-jwkzf"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.387085    3818 scope.go:117] "RemoveContainer" containerID="356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987"
	Nov 08 09:33:19 pause-022459 kubelet[3818]: E1108 09:33:19.300744    3818 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762594399299774726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 08 09:33:19 pause-022459 kubelet[3818]: E1108 09:33:19.300800    3818 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762594399299774726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 08 09:33:29 pause-022459 kubelet[3818]: E1108 09:33:29.305329    3818 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762594409303676529  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 08 09:33:29 pause-022459 kubelet[3818]: E1108 09:33:29.305485    3818 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762594409303676529  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
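The trailing eviction-manager failures in the kubelet log above come from the kubelet being unable to interpret the image-filesystem stats CRI-O returns, so each eviction sync cycle is skipped. The raw stats it is choking on can be dumped directly (a diagnostic sketch using this test's profile name):

    out/minikube-linux-amd64 -p pause-022459 ssh -- sudo crictl imagefsinfo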
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022459 -n pause-022459
helpers_test.go:269: (dbg) Run:  kubectl --context pause-022459 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-022459 -n pause-022459
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-022459 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-022459 logs -n 25: (3.724871382s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                               ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-615410 sudo systemctl cat docker --no-pager                                                                                                              │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/docker/daemon.json                                                                                                                  │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo docker system info                                                                                                                           │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl cat cri-docker --no-pager                                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cri-dockerd --version                                                                                                                        │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl status containerd --all --full --no-pager                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p auto-615410 sudo systemctl cat containerd --no-pager                                                                                                          │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo cat /etc/containerd/config.toml                                                                                                              │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo containerd config dump                                                                                                                       │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl status crio --all --full --no-pager                                                                                                │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo systemctl cat crio --no-pager                                                                                                                │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p auto-615410 sudo crio config                                                                                                                                  │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p auto-615410                                                                                                                                                   │ auto-615410           │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ -p custom-flannel-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio │ custom-flannel-615410 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ ssh     │ -p kindnet-615410 pgrep -a kubelet                                                                                                                               │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p kindnet-615410 sudo cat /etc/nsswitch.conf                                                                                                                    │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p kindnet-615410 sudo cat /etc/hosts                                                                                                                            │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p kindnet-615410 sudo cat /etc/resolv.conf                                                                                                                      │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p kindnet-615410 sudo crictl pods                                                                                                                               │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ ssh     │ -p kindnet-615410 sudo crictl ps --all                                                                                                                           │ kindnet-615410        │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
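	The Audit table above is minikube's persisted invocation history. Assuming a recent minikube build (the flag may vary by version), the same trail can be printed on its own:

	    out/minikube-linux-amd64 logs --audit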
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:06
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:33:06.059883   42540 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:06.060185   42540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:06.060196   42540 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:06.060202   42540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:06.060478   42540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:33:06.061186   42540 out.go:368] Setting JSON to false
	I1108 09:33:06.062389   42540 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4527,"bootTime":1762589859,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:33:06.062520   42540 start.go:143] virtualization: kvm guest
	I1108 09:33:06.064410   42540 out.go:179] * [custom-flannel-615410] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:33:06.065667   42540 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:33:06.065673   42540 notify.go:221] Checking for updates...
	I1108 09:33:06.066773   42540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:06.068297   42540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:33:06.069338   42540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:33:06.070441   42540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:33:06.071474   42540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:06.073118   42540 config.go:182] Loaded profile config "calico-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:06.073273   42540 config.go:182] Loaded profile config "guest-788314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 09:33:06.073393   42540 config.go:182] Loaded profile config "kindnet-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:06.073605   42540 config.go:182] Loaded profile config "pause-022459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:06.073718   42540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:06.117072   42540 out.go:179] * Using the kvm2 driver based on user configuration
	I1108 09:33:06.118135   42540 start.go:309] selected driver: kvm2
	I1108 09:33:06.118158   42540 start.go:930] validating driver "kvm2" against <nil>
	I1108 09:33:06.118173   42540 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:06.119253   42540 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:33:06.119622   42540 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:33:06.119666   42540 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 09:33:06.119684   42540 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1108 09:33:06.119744   42540 start.go:353] cluster config:
	{Name:custom-flannel-615410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-615410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:33:06.119897   42540 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:33:06.121379   42540 out.go:179] * Starting "custom-flannel-615410" primary control-plane node in "custom-flannel-615410" cluster
	I1108 09:33:06.122404   42540 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:06.122450   42540 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:33:06.122476   42540 cache.go:59] Caching tarball of preloaded images
	I1108 09:33:06.122598   42540 preload.go:233] Found /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:33:06.122613   42540 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
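	Preload tarballs are keyed by Kubernetes version and container runtime; when a matching tarball is already in the cache, as here, the download is skipped and the images are extracted straight into the VM. The cache can be inspected on the host (a sketch, path taken from this run):

	    ls -lh /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/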
	I1108 09:33:06.122733   42540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/config.json ...
	I1108 09:33:06.122758   42540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/config.json: {Name:mkd2913ce083f135dcd902d780686a82341b48f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:06.122935   42540 start.go:360] acquireMachinesLock for custom-flannel-615410: {Name:mk17d57b1ca3eb78588f74785db7bcd997a10966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 09:33:06.122991   42540 start.go:364] duration metric: took 32.271µs to acquireMachinesLock for "custom-flannel-615410"
	I1108 09:33:06.123019   42540 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-615410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-615410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:06.123088   42540 start.go:125] createHost starting for "" (driver="kvm2")
	I1108 09:33:04.492430   41318 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:33:04.492454   41318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1108 09:33:04.524908   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:33:06.444126   41318 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.91918695s)
	I1108 09:33:06.444176   41318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:33:06.444269   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:06.444285   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-615410 minikube.k8s.io/updated_at=2025_11_08T09_33_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=calico-615410 minikube.k8s.io/primary=true
	I1108 09:33:06.470577   41318 ops.go:34] apiserver oom_adj: -16
	I1108 09:33:06.659022   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:07.159957   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:07.659130   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:08.159199   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:08.659726   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:09.159355   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:09.659961   41318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:33:10.112491   41318 kubeadm.go:1114] duration metric: took 3.668289508s to wait for elevateKubeSystemPrivileges
	I1108 09:33:10.112550   41318 kubeadm.go:403] duration metric: took 18.858855921s to StartCluster
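	The half-second `kubectl get sa default` loop above is how minikube waits for kube-controller-manager to create the namespace's default ServiceAccount before binding cluster-admin to it. An equivalent hand-rolled wait (a sketch; binary and kubeconfig paths as in the log):

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done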
	I1108 09:33:10.112572   41318 settings.go:142] acquiring lock: {Name:mk0d0617389eeb9d724259ab95a170c08eef0474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:10.112658   41318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:33:10.114312   41318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:10.114587   41318 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.75 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:10.114780   41318 config.go:182] Loaded profile config "calico-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:10.114833   41318 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:33:10.114913   41318 addons.go:70] Setting storage-provisioner=true in profile "calico-615410"
	I1108 09:33:10.114941   41318 addons.go:239] Setting addon storage-provisioner=true in "calico-615410"
	I1108 09:33:10.114969   41318 host.go:66] Checking if "calico-615410" exists ...
	I1108 09:33:10.115715   41318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:33:10.115816   41318 addons.go:70] Setting default-storageclass=true in profile "calico-615410"
	I1108 09:33:10.115839   41318 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-615410"
	I1108 09:33:10.115990   41318 out.go:179] * Verifying Kubernetes components...
	I1108 09:33:10.116933   41318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:33:10.119161   41318 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:33:06.528714   41684 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c 356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987 885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70 4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6 b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba 9c1781af9d254003f57431a008dbd305d695c4a7b22b0394512a38a14f1626b0 866b2618f8007747a26d56b7a72550d44a773826497b64294318b5150163a926 862a4f78a966af1058bd7d1a3a1d9e673d07c78f7f425c8a6930f299a7a66d89 2a139fae3fa4ce1ab8e9e90adddcc72a81e6c1c18f69c7f78024ef18b33d9524: (10.82053907s)
	W1108 09:33:06.528795   41684 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c 356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987 885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70 4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6 b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba 9c1781af9d254003f57431a008dbd305d695c4a7b22b0394512a38a14f1626b0 866b2618f8007747a26d56b7a72550d44a773826497b64294318b5150163a926 862a4f78a966af1058bd7d1a3a1d9e673d07c78f7f425c8a6930f299a7a66d89 2a139fae3fa4ce1ab8e9e90adddcc72a81e6c1c18f69c7f78024ef18b33d9524: Process exited with status 1
	stdout:
	4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c
	356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987
	885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70
	4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6
	b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d
	
	stderr:
	E1108 09:33:06.520360    3569 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310\": container with ID starting with 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 not found: ID does not exist" containerID="91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310"
	time="2025-11-08T09:33:06Z" level=fatal msg="stopping the container \"91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310\": rpc error: code = NotFound desc = could not find container \"91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310\": container with ID starting with 91b7bd7642c321bbdbf420dc9aee36200f9ee0a30f6d2456b05cb6da396fd310 not found: ID does not exist"
	I1108 09:33:06.528858   41684 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 09:33:06.589746   41684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:33:06.608040   41684 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5623 Nov  8 09:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5641 Nov  8 09:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Nov  8 09:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5589 Nov  8 09:31 /etc/kubernetes/scheduler.conf
	
	I1108 09:33:06.608124   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:33:06.622856   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:33:06.635781   41684 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:33:06.635858   41684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:33:06.652359   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:33:06.669416   41684 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:33:06.669512   41684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:33:06.683991   41684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:33:06.696865   41684 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:33:06.696924   41684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
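	The grep-then-rm sequence above checks each kubeconfig for the expected control-plane endpoint and deletes stale ones so that the following `kubeadm init phase kubeconfig all` can regenerate them. Condensed into one loop (a sketch, endpoint as in the log):

	    for f in kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done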
	I1108 09:33:06.713075   41684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:33:06.725978   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:06.788603   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:08.522641   41684 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.73399969s)
	I1108 09:33:08.522723   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:08.947837   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:09.065234   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:09.179171   41684 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:33:09.179269   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:09.680247   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:10.179665   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:10.268169   41684 api_server.go:72] duration metric: took 1.089007176s to wait for apiserver process to appear ...
	I1108 09:33:10.268199   41684 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:33:10.268220   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:10.268775   41684 api_server.go:269] stopped: https://192.168.39.96:8443/healthz: Get "https://192.168.39.96:8443/healthz": dial tcp 192.168.39.96:8443: connect: connection refused
	I1108 09:33:10.770010   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
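	The healthz wait is a plain HTTPS GET retried until the apiserver answers 200; the 403 and 500 responses later in this log are normal start-up phases (anonymous access not yet authorized, then post-start hooks such as rbac/bootstrap-roles still running). To probe by hand with per-check detail (a sketch; -k because the cluster CA may not be trusted yet):

	    curl -k 'https://192.168.39.96:8443/healthz?verbose'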
	I1108 09:33:06.124626   42540 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1108 09:33:06.124890   42540 start.go:159] libmachine.API.Create for "custom-flannel-615410" (driver="kvm2")
	I1108 09:33:06.124927   42540 client.go:173] LocalClient.Create starting
	I1108 09:33:06.124996   42540 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem
	I1108 09:33:06.125043   42540 main.go:143] libmachine: Decoding PEM data...
	I1108 09:33:06.125065   42540 main.go:143] libmachine: Parsing certificate...
	I1108 09:33:06.125127   42540 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem
	I1108 09:33:06.125153   42540 main.go:143] libmachine: Decoding PEM data...
	I1108 09:33:06.125168   42540 main.go:143] libmachine: Parsing certificate...
	I1108 09:33:06.125624   42540 main.go:143] libmachine: creating domain...
	I1108 09:33:06.125646   42540 main.go:143] libmachine: creating network...
	I1108 09:33:06.127385   42540 main.go:143] libmachine: found existing default network
	I1108 09:33:06.127705   42540 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1108 09:33:06.128911   42540 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3e:18:b4} reservation:<nil>}
	I1108 09:33:06.129916   42540 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a2:80:b6} reservation:<nil>}
	I1108 09:33:06.130621   42540 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:14:52} reservation:<nil>}
	I1108 09:33:06.131859   42540 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d2a3e0}
	I1108 09:33:06.131955   42540 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-custom-flannel-615410</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1108 09:33:06.138619   42540 main.go:143] libmachine: creating private network mk-custom-flannel-615410 192.168.72.0/24...
	I1108 09:33:06.231047   42540 main.go:143] libmachine: private network mk-custom-flannel-615410 192.168.72.0/24 created
	I1108 09:33:06.231424   42540 main.go:143] libmachine: <network>
	  <name>mk-custom-flannel-615410</name>
	  <uuid>370a223f-7a96-4fb6-b0e9-86c1871fca6f</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:29:c0:41'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
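	Both XML documents above are ordinary libvirt network definitions: minikube keeps the stock NAT `default` network and adds one isolated, DHCP-enabled bridge per profile. The same objects can be inspected with virsh (a sketch):

	    virsh --connect qemu:///system net-list --all
	    virsh --connect qemu:///system net-dumpxml mk-custom-flannel-615410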
	
	I1108 09:33:06.231465   42540 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410 ...
	I1108 09:33:06.231615   42540 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1108 09:33:06.231629   42540 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:33:06.231730   42540 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21866-5845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1108 09:33:06.477074   42540 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa...
	I1108 09:33:06.561988   42540 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/custom-flannel-615410.rawdisk...
	I1108 09:33:06.562064   42540 main.go:143] libmachine: Writing magic tar header
	I1108 09:33:06.562093   42540 main.go:143] libmachine: Writing SSH key tar header
	I1108 09:33:06.562181   42540 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410 ...
	I1108 09:33:06.562305   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410
	I1108 09:33:06.562354   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410 (perms=drwx------)
	I1108 09:33:06.562375   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube/machines
	I1108 09:33:06.562389   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube/machines (perms=drwxr-xr-x)
	I1108 09:33:06.562404   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:33:06.562422   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845/.minikube (perms=drwxr-xr-x)
	I1108 09:33:06.562430   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21866-5845
	I1108 09:33:06.562438   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21866-5845 (perms=drwxrwxr-x)
	I1108 09:33:06.562448   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1108 09:33:06.562457   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1108 09:33:06.562463   42540 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1108 09:33:06.562473   42540 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1108 09:33:06.562482   42540 main.go:143] libmachine: checking permissions on dir: /home
	I1108 09:33:06.562491   42540 main.go:143] libmachine: skipping /home - not owner
	I1108 09:33:06.562508   42540 main.go:143] libmachine: defining domain...
	I1108 09:33:06.563873   42540 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>custom-flannel-615410</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/custom-flannel-615410.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-custom-flannel-615410'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1108 09:33:06.569099   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:d5:95:6f in network default
	I1108 09:33:06.569832   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:06.569849   42540 main.go:143] libmachine: starting domain...
	I1108 09:33:06.569853   42540 main.go:143] libmachine: ensuring networks are active...
	I1108 09:33:06.570677   42540 main.go:143] libmachine: Ensuring network default is active
	I1108 09:33:06.571033   42540 main.go:143] libmachine: Ensuring network mk-custom-flannel-615410 is active
	I1108 09:33:06.571665   42540 main.go:143] libmachine: getting domain XML...
	I1108 09:33:06.572896   42540 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>custom-flannel-615410</name>
	  <uuid>b92ee915-57b3-40ae-b0f3-23047055b527</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/custom-flannel-615410.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a1:36:61'/>
	      <source network='mk-custom-flannel-615410'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d5:95:6f'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
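	Compared with the XML minikube defined above, libvirt has filled in the generated fields (UUID, machine type, PCI addresses, emulator path). The driver's define/start/inspect cycle maps onto plain virsh (a sketch; the XML file path is hypothetical):

	    virsh --connect qemu:///system define /tmp/custom-flannel-615410.xml
	    virsh --connect qemu:///system start custom-flannel-615410
	    virsh --connect qemu:///system dumpxml custom-flannel-615410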
	
	I1108 09:33:08.028531   42540 main.go:143] libmachine: waiting for domain to start...
	I1108 09:33:08.030073   42540 main.go:143] libmachine: domain is now running
	I1108 09:33:08.030096   42540 main.go:143] libmachine: waiting for IP...
	I1108 09:33:08.030852   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.031560   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.031575   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.031932   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.031970   42540 retry.go:31] will retry after 258.718043ms: waiting for domain to come up
	I1108 09:33:08.292647   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.293457   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.293485   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.293977   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.294021   42540 retry.go:31] will retry after 377.236405ms: waiting for domain to come up
	I1108 09:33:08.673581   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.674426   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.674447   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.674905   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.674947   42540 retry.go:31] will retry after 299.001423ms: waiting for domain to come up
	I1108 09:33:08.975748   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:08.976550   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:08.976569   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:08.977009   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:08.977066   42540 retry.go:31] will retry after 419.143ms: waiting for domain to come up
	I1108 09:33:09.397797   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:09.398674   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:09.398694   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:09.399153   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:09.399200   42540 retry.go:31] will retry after 523.040388ms: waiting for domain to come up
	I1108 09:33:09.924075   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:09.925050   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:09.925076   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:09.925518   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:09.925562   42540 retry.go:31] will retry after 686.008423ms: waiting for domain to come up
	I1108 09:33:10.613670   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:10.614410   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:10.614429   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:10.614901   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:10.614939   42540 retry.go:31] will retry after 1.11728343s: waiting for domain to come up
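	The wait-for-IP loop asks libvirt for the guest's interface addresses, first from the DHCP lease table and then falling back to the ARP cache, retrying with growing backoff until the NIC appears. The same queries by hand (a sketch):

	    virsh --connect qemu:///system domifaddr custom-flannel-615410 --source lease
	    virsh --connect qemu:///system domifaddr custom-flannel-615410 --source arp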
	I1108 09:33:10.120193   41318 addons.go:239] Setting addon default-storageclass=true in "calico-615410"
	I1108 09:33:10.120237   41318 host.go:66] Checking if "calico-615410" exists ...
	I1108 09:33:10.120262   41318 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:33:10.120279   41318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:33:10.123094   41318 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:33:10.123114   41318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:33:10.124795   41318 main.go:143] libmachine: domain calico-615410 has defined MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.125720   41318 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:e6:27", ip: ""} in network mk-calico-615410: {Iface:virbr5 ExpiryTime:2025-11-08 10:32:40 +0000 UTC Type:0 Mac:52:54:00:4a:e6:27 Iaid: IPaddr:192.168.83.75 Prefix:24 Hostname:calico-615410 Clientid:01:52:54:00:4a:e6:27}
	I1108 09:33:10.125758   41318 main.go:143] libmachine: domain calico-615410 has defined IP address 192.168.83.75 and MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.125968   41318 sshutil.go:53] new ssh client: &{IP:192.168.83.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/calico-615410/id_rsa Username:docker}
	I1108 09:33:10.126989   41318 main.go:143] libmachine: domain calico-615410 has defined MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.127518   41318 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:e6:27", ip: ""} in network mk-calico-615410: {Iface:virbr5 ExpiryTime:2025-11-08 10:32:40 +0000 UTC Type:0 Mac:52:54:00:4a:e6:27 Iaid: IPaddr:192.168.83.75 Prefix:24 Hostname:calico-615410 Clientid:01:52:54:00:4a:e6:27}
	I1108 09:33:10.127564   41318 main.go:143] libmachine: domain calico-615410 has defined IP address 192.168.83.75 and MAC address 52:54:00:4a:e6:27 in network mk-calico-615410
	I1108 09:33:10.127770   41318 sshutil.go:53] new ssh client: &{IP:192.168.83.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/calico-615410/id_rsa Username:docker}
	I1108 09:33:10.502775   41318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:33:10.517997   41318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:33:10.780316   41318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:33:10.817984   41318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:33:11.349685   41318 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
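	The sed pipeline a few lines up splices a `hosts` block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.83.1 here). The injected record can be verified from a throwaway pod (a sketch):

	    kubectl --context calico-615410 run dns-test --rm -it --restart=Never \
	      --image=busybox -- nslookup host.minikube.internal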
	I1108 09:33:11.350904   41318 node_ready.go:35] waiting up to 15m0s for node "calico-615410" to be "Ready" ...
	I1108 09:33:11.895521   41318 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-615410" context rescaled to 1 replicas
	I1108 09:33:12.466687   41318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.686333868s)
	I1108 09:33:12.466764   41318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.648754599s)
	I1108 09:33:12.481691   41318 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:33:12.482674   41318 addons.go:515] duration metric: took 2.36783472s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:33:13.356683   41318 node_ready.go:57] node "calico-615410" has "Ready":"False" status (will retry)
	I1108 09:33:13.460250   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:33:13.460341   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:33:13.460362   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:13.497111   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:33:13.497138   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:33:13.768358   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:13.777618   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:33:13.777746   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:33:14.268357   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:14.275293   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:33:14.275510   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:33:14.769215   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:14.784533   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:33:14.784587   41684 api_server.go:103] status: https://192.168.39.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:33:15.269306   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:15.274742   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 200:
	ok
	I1108 09:33:15.283805   41684 api_server.go:141] control plane version: v1.34.1
	I1108 09:33:15.283834   41684 api_server.go:131] duration metric: took 5.015627913s to wait for apiserver health ...
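(Editor's note: the healthz sequence above is the usual control-plane restart progression: the apiserver first rejects the unauthenticated probe outright (403 for system:anonymous), then answers 500 while individual poststarthooks such as rbac/bootstrap-roles are still failing, and finally 200 once every check passes. A minimal polling loop in Go, a sketch rather than minikube's actual api_server.go; InsecureSkipVerify stands in for loading the cluster CA:)

	import (
		"context"
		"crypto/tls"
		"io"
		"log"
		"net/http"
		"time"
	)

	// probeHealthz polls url until the apiserver answers 200 "ok".
	// A real probe would verify the serving cert against the cluster CA
	// instead of skipping verification.
	func probeHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}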
	I1108 09:33:15.283845   41684 cni.go:84] Creating CNI manager for ""
	I1108 09:33:15.283853   41684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 09:33:15.288615   41684 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 09:33:15.289863   41684 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 09:33:15.306135   41684 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
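(Editor's note: the 496-byte file copied here is the bridge CNI config the "Configuring bridge CNI" step announces. A representative conflist of that shape, illustrative only, since the exact template minikube writes may differ in fields and values:)

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}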
	I1108 09:33:15.337591   41684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:33:15.346076   41684 system_pods.go:59] 6 kube-system pods found
	I1108 09:33:15.346117   41684 system_pods.go:61] "coredns-66bc5c9577-bljvk" [ba662ec9-4f89-4b75-ad34-27e5fe5bba61] Running
	I1108 09:33:15.346132   41684 system_pods.go:61] "etcd-pause-022459" [6caec945-cd8e-4d36-9d98-e0346d82f48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:33:15.346141   41684 system_pods.go:61] "kube-apiserver-pause-022459" [952270ad-2035-4f93-b71e-8729e2ac93cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:33:15.346153   41684 system_pods.go:61] "kube-controller-manager-pause-022459" [7516e826-ffd2-41f4-8dec-13d28ac1fcf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:33:15.346162   41684 system_pods.go:61] "kube-proxy-jwkzf" [eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:33:15.346172   41684 system_pods.go:61] "kube-scheduler-pause-022459" [e6a79821-2954-4e41-9cb9-64d610f8cd24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:33:15.346181   41684 system_pods.go:74] duration metric: took 8.561982ms to wait for pod list to return data ...
	I1108 09:33:15.346191   41684 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:33:15.354130   41684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 09:33:15.354230   41684 node_conditions.go:123] node cpu capacity is 2
	I1108 09:33:15.354263   41684 node_conditions.go:105] duration metric: took 8.065549ms to run NodePressure ...
	I1108 09:33:15.354361   41684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:33:11.733961   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:11.735487   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:11.735533   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:11.736062   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:11.736102   42540 retry.go:31] will retry after 1.277034818s: waiting for domain to come up
	I1108 09:33:13.015163   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:13.015916   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:13.015937   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:13.016389   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:13.016424   42540 retry.go:31] will retry after 1.387705285s: waiting for domain to come up
	I1108 09:33:14.405813   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:14.406793   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:14.406814   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:14.407288   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:14.407326   42540 retry.go:31] will retry after 1.81408043s: waiting for domain to come up
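(Editor's note: the retry.go lines above show libmachine polling for the domain's DHCP lease with a growing, jittered delay between attempts. A minimal sketch of that pattern in Go; the multiplier and jitter are assumptions, not retry.go's exact constants:)

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls f until it succeeds or the deadline passes,
	// sleeping a growing, jittered interval between attempts.
	func retryUntil(deadline time.Time, base time.Duration, f func() error) error {
		for {
			err := f()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("deadline exceeded: %w", err)
			}
			sleep := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			base = base * 3 / 2 // back off a little each round
		}
	}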
	I1108 09:33:16.146681   41684 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1108 09:33:16.153742   41684 kubeadm.go:744] kubelet initialised
	I1108 09:33:16.153770   41684 kubeadm.go:745] duration metric: took 7.059023ms waiting for restarted kubelet to initialise ...
	I1108 09:33:16.153788   41684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:33:16.185984   41684 ops.go:34] apiserver oom_adj: -16
	I1108 09:33:16.186006   41684 kubeadm.go:602] duration metric: took 20.597208139s to restartPrimaryControlPlane
	I1108 09:33:16.186018   41684 kubeadm.go:403] duration metric: took 20.911122803s to StartCluster
	I1108 09:33:16.186038   41684 settings.go:142] acquiring lock: {Name:mk0d0617389eeb9d724259ab95a170c08eef0474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:16.186133   41684 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:33:16.187851   41684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5845/kubeconfig: {Name:mkc412363cfe82fe29e1a9ce488fc75c3202c245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:16.188169   41684 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:16.188441   41684 config.go:182] Loaded profile config "pause-022459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:16.188522   41684 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:33:16.189521   41684 out.go:179] * Verifying Kubernetes components...
	I1108 09:33:16.190054   41684 out.go:179] * Enabled addons: 
	W1108 09:33:15.361732   41318 node_ready.go:57] node "calico-615410" has "Ready":"False" status (will retry)
	W1108 09:33:17.364305   41318 node_ready.go:57] node "calico-615410" has "Ready":"False" status (will retry)
	I1108 09:33:18.358634   41318 node_ready.go:49] node "calico-615410" is "Ready"
	I1108 09:33:18.358679   41318 node_ready.go:38] duration metric: took 7.007720021s for node "calico-615410" to be "Ready" ...
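(Editor's note: the node_ready.go wait boils down to polling the Node object until its Ready condition reports True. A minimal client-go equivalent, a sketch assuming a configured kubernetes.Interface rather than minikube's actual helper:)

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Node object until its Ready condition is True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}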
	I1108 09:33:18.358698   41318 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:33:18.358819   41318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:18.415950   41318 api_server.go:72] duration metric: took 8.301323765s to wait for apiserver process to appear ...
	I1108 09:33:18.415983   41318 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:33:18.416015   41318 api_server.go:253] Checking apiserver healthz at https://192.168.83.75:8443/healthz ...
	I1108 09:33:18.427665   41318 api_server.go:279] https://192.168.83.75:8443/healthz returned 200:
	ok
	I1108 09:33:18.431912   41318 api_server.go:141] control plane version: v1.34.1
	I1108 09:33:18.431944   41318 api_server.go:131] duration metric: took 15.952314ms to wait for apiserver health ...
	I1108 09:33:18.431956   41318 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:33:18.438153   41318 system_pods.go:59] 9 kube-system pods found
	I1108 09:33:18.438194   41318 system_pods.go:61] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:18.438209   41318 system_pods.go:61] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:18.438221   41318 system_pods.go:61] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:18.438237   41318 system_pods.go:61] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:18.438251   41318 system_pods.go:61] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:18.438257   41318 system_pods.go:61] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:18.438263   41318 system_pods.go:61] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:18.438281   41318 system_pods.go:61] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:18.438301   41318 system_pods.go:61] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:18.438309   41318 system_pods.go:74] duration metric: took 6.346153ms to wait for pod list to return data ...
	I1108 09:33:18.438319   41318 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:33:18.446457   41318 default_sa.go:45] found service account: "default"
	I1108 09:33:18.446480   41318 default_sa.go:55] duration metric: took 8.149799ms for default service account to be created ...
	I1108 09:33:18.446490   41318 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:33:18.452690   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:18.452729   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:18.452741   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:18.452749   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:18.452754   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:18.452760   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:18.452767   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:18.452773   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:18.452783   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:18.452791   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:18.452826   41318 retry.go:31] will retry after 243.178969ms: missing components: kube-dns
	I1108 09:33:18.704420   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:18.704465   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:18.704480   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:18.704490   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:18.704522   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:18.704530   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:18.704537   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:18.704556   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:18.704561   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:18.704581   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:18.704606   41318 retry.go:31] will retry after 299.397948ms: missing components: kube-dns
	I1108 09:33:16.190703   41684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:33:16.191227   41684 addons.go:515] duration metric: took 2.730403ms for enable addons: enabled=[]
	I1108 09:33:16.517809   41684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:33:16.545401   41684 node_ready.go:35] waiting up to 6m0s for node "pause-022459" to be "Ready" ...
	I1108 09:33:16.550404   41684 node_ready.go:49] node "pause-022459" is "Ready"
	I1108 09:33:16.550438   41684 node_ready.go:38] duration metric: took 4.977123ms for node "pause-022459" to be "Ready" ...
	I1108 09:33:16.550453   41684 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:33:16.550528   41684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:33:16.581733   41684 api_server.go:72] duration metric: took 393.532938ms to wait for apiserver process to appear ...
	I1108 09:33:16.581762   41684 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:33:16.581778   41684 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8443/healthz ...
	I1108 09:33:16.589647   41684 api_server.go:279] https://192.168.39.96:8443/healthz returned 200:
	ok
	I1108 09:33:16.591292   41684 api_server.go:141] control plane version: v1.34.1
	I1108 09:33:16.591317   41684 api_server.go:131] duration metric: took 9.548627ms to wait for apiserver health ...
	I1108 09:33:16.591328   41684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:33:16.595836   41684 system_pods.go:59] 6 kube-system pods found
	I1108 09:33:16.595866   41684 system_pods.go:61] "coredns-66bc5c9577-bljvk" [ba662ec9-4f89-4b75-ad34-27e5fe5bba61] Running
	I1108 09:33:16.595877   41684 system_pods.go:61] "etcd-pause-022459" [6caec945-cd8e-4d36-9d98-e0346d82f48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:33:16.595886   41684 system_pods.go:61] "kube-apiserver-pause-022459" [952270ad-2035-4f93-b71e-8729e2ac93cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:33:16.595896   41684 system_pods.go:61] "kube-controller-manager-pause-022459" [7516e826-ffd2-41f4-8dec-13d28ac1fcf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:33:16.595903   41684 system_pods.go:61] "kube-proxy-jwkzf" [eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c] Running
	I1108 09:33:16.595912   41684 system_pods.go:61] "kube-scheduler-pause-022459" [e6a79821-2954-4e41-9cb9-64d610f8cd24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:33:16.595923   41684 system_pods.go:74] duration metric: took 4.587422ms to wait for pod list to return data ...
	I1108 09:33:16.595935   41684 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:33:16.600087   41684 default_sa.go:45] found service account: "default"
	I1108 09:33:16.600104   41684 default_sa.go:55] duration metric: took 4.164559ms for default service account to be created ...
	I1108 09:33:16.600112   41684 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:33:16.604053   41684 system_pods.go:86] 6 kube-system pods found
	I1108 09:33:16.604077   41684 system_pods.go:89] "coredns-66bc5c9577-bljvk" [ba662ec9-4f89-4b75-ad34-27e5fe5bba61] Running
	I1108 09:33:16.604088   41684 system_pods.go:89] "etcd-pause-022459" [6caec945-cd8e-4d36-9d98-e0346d82f48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:33:16.604098   41684 system_pods.go:89] "kube-apiserver-pause-022459" [952270ad-2035-4f93-b71e-8729e2ac93cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:33:16.604109   41684 system_pods.go:89] "kube-controller-manager-pause-022459" [7516e826-ffd2-41f4-8dec-13d28ac1fcf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:33:16.604118   41684 system_pods.go:89] "kube-proxy-jwkzf" [eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c] Running
	I1108 09:33:16.604126   41684 system_pods.go:89] "kube-scheduler-pause-022459" [e6a79821-2954-4e41-9cb9-64d610f8cd24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:33:16.604134   41684 system_pods.go:126] duration metric: took 4.017421ms to wait for k8s-apps to be running ...
	I1108 09:33:16.604146   41684 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:33:16.604197   41684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:33:16.627887   41684 system_svc.go:56] duration metric: took 23.730186ms WaitForService to wait for kubelet
	I1108 09:33:16.627921   41684 kubeadm.go:587] duration metric: took 439.721784ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:33:16.627942   41684 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:33:16.632609   41684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1108 09:33:16.632640   41684 node_conditions.go:123] node cpu capacity is 2
	I1108 09:33:16.632655   41684 node_conditions.go:105] duration metric: took 4.70671ms to run NodePressure ...
	I1108 09:33:16.632672   41684 start.go:242] waiting for startup goroutines ...
	I1108 09:33:16.632683   41684 start.go:247] waiting for cluster config update ...
	I1108 09:33:16.632707   41684 start.go:256] writing updated cluster config ...
	I1108 09:33:16.702104   41684 ssh_runner.go:195] Run: rm -f paused
	I1108 09:33:16.708550   41684 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:33:16.709327   41684 kapi.go:59] client config for pause-022459: &rest.Config{Host:"https://192.168.39.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/profiles/pause-022459/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:33:16.714352   41684 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bljvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:16.723675   41684 pod_ready.go:94] pod "coredns-66bc5c9577-bljvk" is "Ready"
	I1108 09:33:16.723709   41684 pod_ready.go:86] duration metric: took 9.335578ms for pod "coredns-66bc5c9577-bljvk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:16.727453   41684 pod_ready.go:83] waiting for pod "etcd-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:33:18.735526   41684 pod_ready.go:104] pod "etcd-pause-022459" is not "Ready", error: <nil>
	W1108 09:33:20.735670   41684 pod_ready.go:104] pod "etcd-pause-022459" is not "Ready", error: <nil>
	I1108 09:33:16.223266   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:16.224015   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:16.224032   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:16.224523   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:16.224565   42540 retry.go:31] will retry after 2.770031139s: waiting for domain to come up
	I1108 09:33:18.995813   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:18.996562   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:18.996584   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:18.996965   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:18.996999   42540 retry.go:31] will retry after 2.632439756s: waiting for domain to come up
	I1108 09:33:19.009672   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:19.009710   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:19.009734   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:19.009751   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:19.009761   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:19.009773   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:19.009782   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:19.009791   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:19.009801   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:19.009811   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:33:19.009835   41318 retry.go:31] will retry after 365.252388ms: missing components: kube-dns
	I1108 09:33:19.381387   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:19.381427   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:19.381446   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:19.381456   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:19.381466   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:19.381473   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:19.381480   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:19.381486   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:19.381519   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:19.381530   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:19.381552   41318 retry.go:31] will retry after 505.776445ms: missing components: kube-dns
	I1108 09:33:19.891960   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:19.891999   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:19.892017   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:19.892027   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:19.892033   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:19.892045   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:19.892057   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:19.892068   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:19.892073   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:19.892080   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:19.892103   41318 retry.go:31] will retry after 628.071399ms: missing components: kube-dns
	I1108 09:33:20.526230   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:20.526274   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:20.526286   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:20.526296   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:20.526307   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:20.526314   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:20.526319   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:20.526328   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:20.526333   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:20.526342   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:20.526360   41318 retry.go:31] will retry after 904.625149ms: missing components: kube-dns
	I1108 09:33:21.437696   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:21.437744   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:21.437757   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:21.437767   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:21.437777   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:21.437785   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:21.437790   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:21.437796   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:21.437806   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:21.437811   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:21.437827   41318 retry.go:31] will retry after 792.380889ms: missing components: kube-dns
	I1108 09:33:22.237481   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:22.237537   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:22.237553   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:22.237563   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:22.237572   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:22.237580   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:22.237585   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:22.237594   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:22.237602   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:22.237607   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:22.237625   41318 retry.go:31] will retry after 1.218879985s: missing components: kube-dns
	I1108 09:33:23.466488   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:23.466538   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:23.466549   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:23.466555   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:23.466560   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:23.466564   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:23.466567   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:23.466571   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:23.466575   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:23.466578   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:23.466592   41318 retry.go:31] will retry after 1.329928251s: missing components: kube-dns
	I1108 09:33:22.239587   41684 pod_ready.go:94] pod "etcd-pause-022459" is "Ready"
	I1108 09:33:22.239613   41684 pod_ready.go:86] duration metric: took 5.512129431s for pod "etcd-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:22.242115   41684 pod_ready.go:83] waiting for pod "kube-apiserver-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:33:24.252833   41684 pod_ready.go:104] pod "kube-apiserver-pause-022459" is not "Ready", error: <nil>
	I1108 09:33:21.630805   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:21.631633   42540 main.go:143] libmachine: no network interface addresses found for domain custom-flannel-615410 (source=lease)
	I1108 09:33:21.631657   42540 main.go:143] libmachine: trying to list again with source=arp
	I1108 09:33:21.632151   42540 main.go:143] libmachine: unable to find current IP address of domain custom-flannel-615410 in network mk-custom-flannel-615410 (interfaces detected: [])
	I1108 09:33:21.632195   42540 retry.go:31] will retry after 2.789068555s: waiting for domain to come up
	I1108 09:33:24.422649   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.423348   42540 main.go:143] libmachine: domain custom-flannel-615410 has current primary IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.423364   42540 main.go:143] libmachine: found domain IP: 192.168.72.152
	I1108 09:33:24.423372   42540 main.go:143] libmachine: reserving static IP address...
	I1108 09:33:24.423883   42540 main.go:143] libmachine: unable to find host DHCP lease matching {name: "custom-flannel-615410", mac: "52:54:00:a1:36:61", ip: "192.168.72.152"} in network mk-custom-flannel-615410
	I1108 09:33:24.679546   42540 main.go:143] libmachine: reserved static IP address 192.168.72.152 for domain custom-flannel-615410
	I1108 09:33:24.679576   42540 main.go:143] libmachine: waiting for SSH...
	I1108 09:33:24.679609   42540 main.go:143] libmachine: Getting to WaitForSSH function...
	I1108 09:33:24.683049   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.683622   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:24.683663   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.683942   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:24.684258   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:24.684276   42540 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1108 09:33:24.793656   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:33:24.794169   42540 main.go:143] libmachine: domain creation complete
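(Editor's note: WaitForSSH above succeeds once `exit 0` runs cleanly over the fresh connection, after which provisioning commands like `hostname` go over the same channel. Running such commands from Go usually goes through golang.org/x/crypto/ssh; a minimal sketch assuming the machine's id_rsa key, not libmachine's exact code:)

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH executes one command as the docker user and returns its output.
	func runSSH(addr, keyPath, cmd string) ([]byte, error) {
		pem, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(pem)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return nil, err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return nil, err
		}
		defer session.Close()
		return session.CombinedOutput(cmd)
	}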
	I1108 09:33:24.796091   42540 machine.go:94] provisionDockerMachine start ...
	I1108 09:33:24.799090   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.799644   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:24.799673   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.799865   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:24.800139   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:24.800153   42540 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:33:24.915766   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1108 09:33:24.915794   42540 buildroot.go:166] provisioning hostname "custom-flannel-615410"
	I1108 09:33:24.918934   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.919428   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:24.919457   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:24.919689   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:24.919960   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:24.919982   42540 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-615410 && echo "custom-flannel-615410" | sudo tee /etc/hostname
	I1108 09:33:25.050485   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-615410
	
	I1108 09:33:25.054118   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.054671   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.054715   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.054903   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:25.055166   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:25.055183   42540 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-615410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-615410/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-615410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:33:25.184716   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:33:25.184754   42540 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5845/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5845/.minikube}
	I1108 09:33:25.184797   42540 buildroot.go:174] setting up certificates
	I1108 09:33:25.184811   42540 provision.go:84] configureAuth start
	I1108 09:33:25.188581   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.189160   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.189202   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.192448   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.192975   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.193020   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.193186   42540 provision.go:143] copyHostCerts
	I1108 09:33:25.193247   42540 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem, removing ...
	I1108 09:33:25.193268   42540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem
	I1108 09:33:25.193354   42540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/ca.pem (1082 bytes)
	I1108 09:33:25.193479   42540 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem, removing ...
	I1108 09:33:25.193514   42540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem
	I1108 09:33:25.193571   42540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/cert.pem (1123 bytes)
	I1108 09:33:25.193677   42540 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem, removing ...
	I1108 09:33:25.193690   42540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem
	I1108 09:33:25.193731   42540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5845/.minikube/key.pem (1675 bytes)
	I1108 09:33:25.193809   42540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-615410 san=[127.0.0.1 192.168.72.152 custom-flannel-615410 localhost minikube]
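
The server cert above is generated with the listed SANs so the endpoint stays valid under every name the host can be reached by. A self-contained sketch with crypto/x509 (self-signed here for brevity; the real flow signs with the profile's CA key, and the 26280h lifetime mirrors the CertExpiration in this profile's config):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-615410"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as logged: san=[127.0.0.1 192.168.72.152 custom-flannel-615410 localhost minikube]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.152")},
            DNSNames:    []string{"custom-flannel-615410", "localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
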
	I1108 09:33:25.294409   42540 provision.go:177] copyRemoteCerts
	I1108 09:33:25.294464   42540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:33:25.297196   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.297608   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.297636   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.297760   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:25.386113   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:33:25.422609   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 09:33:25.453103   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:33:25.482876   42540 provision.go:87] duration metric: took 298.051404ms to configureAuth
	I1108 09:33:25.482907   42540 buildroot.go:189] setting minikube options for container-runtime
	I1108 09:33:25.483096   42540 config.go:182] Loaded profile config "custom-flannel-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:25.486073   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.486581   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.486612   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.486800   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:25.487042   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:25.487058   42540 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:33:25.734536   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:33:25.734579   42540 machine.go:97] duration metric: took 938.467968ms to provisionDockerMachine
	I1108 09:33:25.734590   42540 client.go:176] duration metric: took 19.609653202s to LocalClient.Create
	I1108 09:33:25.734609   42540 start.go:167] duration metric: took 19.609719823s to libmachine.API.Create "custom-flannel-615410"
	I1108 09:33:25.734617   42540 start.go:293] postStartSetup for "custom-flannel-615410" (driver="kvm2")
	I1108 09:33:25.734629   42540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:33:25.734692   42540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:33:25.738027   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.738459   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.738483   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.738681   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:25.834810   42540 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:33:25.841430   42540 info.go:137] Remote host: Buildroot 2025.02
	I1108 09:33:25.841467   42540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/addons for local assets ...
	I1108 09:33:25.841555   42540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5845/.minikube/files for local assets ...
	I1108 09:33:25.841665   42540 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem -> 97452.pem in /etc/ssl/certs
	I1108 09:33:25.841806   42540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:33:25.857215   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/ssl/certs/97452.pem --> /etc/ssl/certs/97452.pem (1708 bytes)
	I1108 09:33:25.891370   42540 start.go:296] duration metric: took 156.736127ms for postStartSetup
	I1108 09:33:25.895020   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.895571   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.895604   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.895902   42540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/config.json ...
	I1108 09:33:25.896131   42540 start.go:128] duration metric: took 19.773030774s to createHost
	I1108 09:33:25.898688   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.899066   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:25.899093   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:25.899282   42540 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:25.899591   42540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1108 09:33:25.899605   42540 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1108 09:33:26.014068   42540 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762594405.959814915
	
	I1108 09:33:26.014095   42540 fix.go:216] guest clock: 1762594405.959814915
	I1108 09:33:26.014104   42540 fix.go:229] Guest: 2025-11-08 09:33:25.959814915 +0000 UTC Remote: 2025-11-08 09:33:25.896144199 +0000 UTC m=+19.898782632 (delta=63.670716ms)
	I1108 09:33:26.014119   42540 fix.go:200] guest clock delta is within tolerance: 63.670716ms
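
The clock check above runs `date +%s.%N` in the guest and compares it against the host's wall clock; a delta under the tolerance (63.7ms here) means no resync is needed. A sketch of that comparison, with the tolerance value assumed for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1762594405.959814915" // guest's `date +%s.%N` output, as logged
        parts := strings.SplitN(guestOut, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        delta := time.Since(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold, for illustration only
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock skew %v exceeds %v; would resync\n", delta, tolerance)
        }
    }
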
	I1108 09:33:26.014123   42540 start.go:83] releasing machines lock for "custom-flannel-615410", held for 19.891121217s
	I1108 09:33:26.017247   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.017705   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:26.017729   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.018351   42540 ssh_runner.go:195] Run: cat /version.json
	I1108 09:33:26.018371   42540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:33:26.021715   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.021719   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.022304   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:26.022324   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:26.022343   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.022354   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:26.022532   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:26.022708   42540 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/custom-flannel-615410/id_rsa Username:docker}
	I1108 09:33:26.134644   42540 ssh_runner.go:195] Run: systemctl --version
	I1108 09:33:26.141626   42540 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:33:26.317052   42540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:33:26.325621   42540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:33:26.325680   42540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:33:26.350578   42540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:33:26.350606   42540 start.go:496] detecting cgroup driver to use...
	I1108 09:33:26.350680   42540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:33:26.374561   42540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:33:26.395607   42540 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:33:26.395693   42540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:33:26.416857   42540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:33:26.437078   42540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:33:26.613155   42540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:33:26.849240   42540 docker.go:234] disabling docker service ...
	I1108 09:33:26.849323   42540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:33:26.866404   42540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:33:26.882838   42540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:33:27.053946   42540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:33:27.241767   42540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:33:27.262643   42540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:33:27.296425   42540 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:33:27.296515   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.314293   42540 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:33:27.314372   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.328455   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.342952   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.356376   42540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:33:27.371255   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.385620   42540 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:33:27.411492   42540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
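
The grep-or-sed pair above is an idempotent edit: create an empty default_sysctls block only if none exists, then splice the unprivileged-port entry in after the opening bracket. The same shape in Go (hypothetical helper operating on the config as a string, not minikube code):

    package main

    import (
        "fmt"
        "strings"
    )

    func ensureSysctl(conf, entry string) string {
        if !strings.Contains(conf, "default_sysctls") {
            conf += "\ndefault_sysctls = [\n]\n" // create an empty block first
        }
        if strings.Contains(conf, entry) {
            return conf // already present; keep the edit idempotent
        }
        return strings.Replace(conf, "default_sysctls = [",
            "default_sysctls = [\n  \""+entry+"\",", 1)
    }

    func main() {
        conf := `cgroup_manager = "cgroupfs"` + "\n" + `conmon_cgroup = "pod"`
        fmt.Println(ensureSysctl(conf, "net.ipv4.ip_unprivileged_port_start=0"))
    }
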
	I1108 09:33:27.427044   42540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:33:27.439724   42540 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 09:33:27.439793   42540 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 09:33:27.464863   42540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
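
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, so the runner falls back to modprobe and moves on. The probe-then-load fallback, sketched with os/exec (assumes root):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureBridgeNetfilter() error {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
            return nil // module already loaded, sysctl key present
        }
        if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %w", err)
        }
        return exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println("netfilter bridge setup failed:", err)
        }
    }
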
	I1108 09:33:27.479288   42540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:33:27.645128   42540 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:33:28.090350   42540 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:33:28.090414   42540 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:33:28.098897   42540 start.go:564] Will wait 60s for crictl version
	I1108 09:33:28.098953   42540 ssh_runner.go:195] Run: which crictl
	I1108 09:33:28.103522   42540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 09:33:28.156915   42540 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1108 09:33:28.156998   42540 ssh_runner.go:195] Run: crio --version
	I1108 09:33:28.202920   42540 ssh_runner.go:195] Run: crio --version
	I1108 09:33:28.239674   42540 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	W1108 09:33:26.253103   41684 pod_ready.go:104] pod "kube-apiserver-pause-022459" is not "Ready", error: <nil>
	I1108 09:33:28.253121   41684 pod_ready.go:94] pod "kube-apiserver-pause-022459" is "Ready"
	I1108 09:33:28.253147   41684 pod_ready.go:86] duration metric: took 6.011010476s for pod "kube-apiserver-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.255688   41684 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.269480   41684 pod_ready.go:94] pod "kube-controller-manager-pause-022459" is "Ready"
	I1108 09:33:28.269528   41684 pod_ready.go:86] duration metric: took 13.811809ms for pod "kube-controller-manager-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.273068   41684 pod_ready.go:83] waiting for pod "kube-proxy-jwkzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.280191   41684 pod_ready.go:94] pod "kube-proxy-jwkzf" is "Ready"
	I1108 09:33:28.280214   41684 pod_ready.go:86] duration metric: took 7.122426ms for pod "kube-proxy-jwkzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.283256   41684 pod_ready.go:83] waiting for pod "kube-scheduler-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.446415   41684 pod_ready.go:94] pod "kube-scheduler-pause-022459" is "Ready"
	I1108 09:33:28.446452   41684 pod_ready.go:86] duration metric: took 163.170946ms for pod "kube-scheduler-pause-022459" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:33:28.446468   41684 pod_ready.go:40] duration metric: took 11.737884625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:33:28.497095   41684 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:33:28.498640   41684 out.go:179] * Done! kubectl is now configured to use "pause-022459" cluster and "default" namespace by default
	I1108 09:33:24.809221   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:24.809250   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:24.809261   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:24.809270   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:24.809276   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:24.809282   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:24.809287   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:24.809292   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:24.809297   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:24.809308   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:24.809325   41318 retry.go:31] will retry after 1.864369s: missing components: kube-dns
	I1108 09:33:26.680224   41318 system_pods.go:86] 9 kube-system pods found
	I1108 09:33:26.680264   41318 system_pods.go:89] "calico-kube-controllers-5766bdd7c-frkbg" [ea9ee21a-09db-4ffb-ab6f-76bc3578591c] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:33:26.680276   41318 system_pods.go:89] "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:33:26.680286   41318 system_pods.go:89] "coredns-66bc5c9577-wtpc4" [28c4e05e-193b-44f3-8785-16d39355b925] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:33:26.680298   41318 system_pods.go:89] "etcd-calico-615410" [1b533016-f8b6-4b2c-9f38-ed1d3f4a0395] Running
	I1108 09:33:26.680305   41318 system_pods.go:89] "kube-apiserver-calico-615410" [3e786a96-6553-4167-901a-ab5edcc39af7] Running
	I1108 09:33:26.680313   41318 system_pods.go:89] "kube-controller-manager-calico-615410" [81bca73c-b9e5-474c-bfba-a28f1903e9c6] Running
	I1108 09:33:26.680323   41318 system_pods.go:89] "kube-proxy-5dg56" [dfd6c115-e95e-46e7-918c-d319f0803361] Running
	I1108 09:33:26.680333   41318 system_pods.go:89] "kube-scheduler-calico-615410" [644323f0-affa-40ef-ac2b-b19e0d1e6054] Running
	I1108 09:33:26.680337   41318 system_pods.go:89] "storage-provisioner" [fc270551-dc36-4d04-bed7-5cfa8158d8c3] Running
	I1108 09:33:26.680352   41318 retry.go:31] will retry after 2.869651595s: missing components: kube-dns
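
The retry.go lines above are a poll-with-backoff loop: list the kube-system pods, diff against the required components, and sleep a randomized interval before the next attempt. A minimal sketch of that shape (the check function is a stand-in for the real pod listing):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func waitForComponents(check func() []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; still missing: %v", missing)
            }
            wait := time.Second + time.Duration(rand.Int63n(int64(2*time.Second)))
            fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
            time.Sleep(wait)
        }
    }

    func main() {
        attempts := 0
        err := waitForComponents(func() []string {
            attempts++
            if attempts < 3 {
                return []string{"kube-dns"} // simulated until the third poll
            }
            return nil
        }, time.Minute)
        fmt.Println("done:", err)
    }
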
	I1108 09:33:28.243953   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:28.244544   42540 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:36:61", ip: ""} in network mk-custom-flannel-615410: {Iface:virbr4 ExpiryTime:2025-11-08 10:33:24 +0000 UTC Type:0 Mac:52:54:00:a1:36:61 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:custom-flannel-615410 Clientid:01:52:54:00:a1:36:61}
	I1108 09:33:28.244611   42540 main.go:143] libmachine: domain custom-flannel-615410 has defined IP address 192.168.72.152 and MAC address 52:54:00:a1:36:61 in network mk-custom-flannel-615410
	I1108 09:33:28.244908   42540 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1108 09:33:28.250221   42540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
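
The grep above probes for an existing host.minikube.internal entry; the bash one-liner then rewrites /etc/hosts by filtering the old line out and appending the gateway mapping. Equivalent logic in Go (pure string transform, hypothetical helper):

    package main

    import (
        "fmt"
        "strings"
    )

    func setHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line) // keep unrelated entries untouched
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
        fmt.Print(setHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
    }
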
	I1108 09:33:28.272200   42540 kubeadm.go:884] updating cluster {Name:custom-flannel-615410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-615410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:33:28.272340   42540 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:28.272410   42540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:33:28.314258   42540 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1108 09:33:28.314339   42540 ssh_runner.go:195] Run: which lz4
	I1108 09:33:28.319307   42540 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 09:33:28.324718   42540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 09:33:28.324750   42540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1108 09:33:30.207080   42540 crio.go:462] duration metric: took 1.887796342s to copy over tarball
	I1108 09:33:30.207152   42540 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
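
The preload path above avoids pulling each image individually: probe for the tarball with stat, scp ~409 MB over only if it is absent, then unpack into /var with xattrs preserved. The local-command equivalent of those two remote steps (exec sketch; only succeeds on a guest that actually has the tarball):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Existence probe mirrors the logged `stat -c "%s %y" /preloaded.tar.lz4`.
        if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
            fmt.Println("tarball missing; would scp it over first")
        }
        // Extraction command as logged: preserve xattrs, decompress with lz4.
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
        }
    }
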
	
	
	==> CRI-O <==
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.111620186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762594412111582870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9883c7b0-a903-4e91-8553-07a098761961 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.112244505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81728dcb-6e1d-4dac-9f5b-f49a8e7ad674 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.112611301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81728dcb-6e1d-4dac-9f5b-f49a8e7ad674 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.113280326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762594394407919050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762594389860952434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762594389837601657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762594389859885151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762594389792872742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe,PodSandboxId:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762594375813865445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762594374742763358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762594374841363413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1762594374728812258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762594374670567363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762594374628528257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba,PodSandboxId:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762594320616875377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81728dcb-6e1d-4dac-9f5b-f49a8e7ad674 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.187011103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fbdc465-624f-4d0d-882e-8c1d45eaa231 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.187662294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fbdc465-624f-4d0d-882e-8c1d45eaa231 name=/runtime.v1.RuntimeService/Version
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.191585615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2755766-52ab-4ae4-823d-3544f7fde3e8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.192250499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762594412192218429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2755766-52ab-4ae4-823d-3544f7fde3e8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.192961600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1e5ea6c-76c4-4164-b1bd-9b35d1917515 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.193037102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1e5ea6c-76c4-4164-b1bd-9b35d1917515 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 09:33:32 pause-022459 crio[2798]: time="2025-11-08 09:33:32.193362219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762594394407919050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762594389860952434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762594389837601657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762594389859885151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762594389792872742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe,PodSandboxId:67a5c2d4abb3fd531bbf5caf022125262d7953b0cf79ef237da7c3dfcd116ac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762594375813865445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987,PodSandboxId:e9c1d81364077a3ede68d8114b9a9d0d2a710861bd9108d5ee82487c1c9b9527,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762594374742763358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwkzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c,PodSandboxId:cd87e1c46f3beb91359da36fa745ec9b0c7bf23c3569fe0f615449825ba616fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762594374841363413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aac9aa9657cbe0ee0c163fb07b3bfb9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70,PodSandboxId:1ca6ab3b485aa4470e12efc1bba116799bfb4125ccaad02277b3d645db1cc338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1762594374728812258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6f0907d7e974011d92c91aa0853cd5,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6,PodSandboxId:9177ade588811cb4f48fa1f53495df2385ee976b539d62ae6e913160fdf59242,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762594374670567363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaec152f6ad705ccc80ffd0d36362e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d,PodSandboxId:bd430f93a42026fc5a25c9380d54934b3cc5d90f8a222f489da24a95596b163d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762594374628528257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022459,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def352d64f30b28eafe8d23008c1c9f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba,PodSandboxId:22386cb0c7c044aca9cb1f4b33faeb0db06f5824a712303465c60b56adf3bdf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762594320616875377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bljvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba662ec9-4f89-4b75-ad34-27e5fe5bba61,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1e5ea6c-76c4-4164-b1bd-9b35d1917515 name=/runtime.v1.RuntimeService/ListContainers
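	These ListContainers dumps are CRI-O answering the kubelet's periodic polling over the CRI gRPC API. For reference, a minimal Go sketch that issues the same RPC against the CRI-O socket; the socket path is the CRI-O default and the k8s.io/cri-api bindings are the same ones crictl and the kubelet use. This is a sketch under those assumptions, not code from this suite:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's runtime socket (default path; adjust for other runtimes).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter matches the "No filters were applied" requests above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, name, attempt and state, roughly matching the
		// "container status" table further down in these logs.
		fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
```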
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	176ce5f957e27       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   18 seconds ago       Running             kube-proxy                2                   e9c1d81364077       kube-proxy-jwkzf
	7fc2093d83a09       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            2                   bd430f93a4202       kube-scheduler-pause-022459
	835741b73bd96       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      2                   9177ade588811       etcd-pause-022459
	ea2d4c0331e40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            2                   1ca6ab3b485aa       kube-apiserver-pause-022459
	a72f15a407001       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Running             kube-controller-manager   2                   cd87e1c46f3be       kube-controller-manager-pause-022459
	031e9527c6baf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   36 seconds ago       Running             coredns                   1                   67a5c2d4abb3f       coredns-66bc5c9577-bljvk
	4fcc16ab58f2c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago       Exited              kube-controller-manager   1                   cd87e1c46f3be       kube-controller-manager-pause-022459
	356af4bb0b055       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   37 seconds ago       Exited              kube-proxy                1                   e9c1d81364077       kube-proxy-jwkzf
	885714e5108a3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago       Exited              kube-apiserver            1                   1ca6ab3b485aa       kube-apiserver-pause-022459
	4e3c3d77a2027       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago       Exited              etcd                      1                   9177ade588811       etcd-pause-022459
	b7d2ad19411e4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago       Exited              kube-scheduler            1                   bd430f93a4202       kube-scheduler-pause-022459
	e6f2b9508a47c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   22386cb0c7c04       coredns-66bc5c9577-bljvk
	
	
	==> coredns [031e9527c6baf5ec42c13960f0a3775c1ed6b15c7dc2f007f8b0739fd1bf3bfe] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33398 - 44078 "HINFO IN 8167381204853300725.4431295867302043852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027160886s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [e6f2b9508a47c5a95c54c83cfab81df406c3f7e7f8b0f6206c7f9b72434f17ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36395 - 45747 "HINFO IN 6104717662577509140.8260565522814595785. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026327627s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
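	The dial tcp 10.96.0.1:443: i/o timeout lines show CoreDNS's kubernetes plugin failing to reach the default apiserver Service ClusterIP while the control plane restarted; the instance then received SIGTERM and entered lameduck. A minimal in-pod reachability probe for that same path (VIP and port taken from the log above; the timeout value is an assumption):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the kubernetes Service ClusterIP CoreDNS was dialing.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" errors above.
		fmt.Println("apiserver Service unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver Service reachable")
}
```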
	
	
	==> describe nodes <==
	Name:               pause-022459
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-022459
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=pause-022459
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_31_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:31:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-022459
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:33:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:33:13 +0000   Sat, 08 Nov 2025 09:31:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    pause-022459
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 25392fadbadd4ddd9e9cb8e77016aa89
	  System UUID:                25392fad-badd-4ddd-9e9c-b8e77016aa89
	  Boot ID:                    f0eb18d2-bb38-44f5-8e6f-f7348eb6731d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bljvk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     93s
	  kube-system                 etcd-pause-022459                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-022459             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-022459    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-jwkzf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-022459             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
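	The percentages here are the summed pod requests over the node's allocatable resources, truncated to whole percents: 750m of 2000m CPU is 37.5%, shown as 37%, and 170Mi (174080Ki) of 3035912Ki memory is about 5.7%, shown as 5%. A quick check of that arithmetic:

```go
package main

import "fmt"

func main() {
	// Allocatable values from the node description above.
	cpuMillis, memKi := 2000.0, 3035912.0
	fmt.Printf("cpu:    %.1f%%\n", 750/cpuMillis*100)  // 750m requested -> 37.5%
	fmt.Printf("memory: %.1f%%\n", 170*1024/memKi*100) // 170Mi requested -> ~5.7%
}
```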
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s (x8 over 105s)  kubelet          Node pause-022459 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 105s)  kubelet          Node pause-022459 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 105s)  kubelet          Node pause-022459 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node pause-022459 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  98s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node pause-022459 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node pause-022459 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeReady                97s                  kubelet          Node pause-022459 status is now: NodeReady
	  Normal  RegisteredNode           94s                  node-controller  Node pause-022459 event: Registered Node pause-022459 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-022459 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-022459 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-022459 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                  node-controller  Node pause-022459 event: Registered Node pause-022459 in Controller
	
	
	==> dmesg <==
	[Nov 8 09:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001483] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007486] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.173693] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.111838] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.091305] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.186817] kauditd_printk_skb: 171 callbacks suppressed
	[Nov 8 09:32] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.598394] kauditd_printk_skb: 219 callbacks suppressed
	[ +22.229738] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 8 09:33] kauditd_printk_skb: 321 callbacks suppressed
	[  +5.593301] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.549085] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [4e3c3d77a20275ca3b5bcd72af580b374f89f6c5c46b0cc9ece5605c82eae6d6] <==
	{"level":"warn","ts":"2025-11-08T09:32:57.077289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.091990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.108561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.128754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.144319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.161249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:32:57.269543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:33:06.233996Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T09:33:06.234072Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-022459","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	{"level":"error","ts":"2025-11-08T09:33:06.234151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:33:06.238824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:33:06.238929Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:33:06.238959Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4b4d4eeb3ae7df8","current-leader-member-id":"d4b4d4eeb3ae7df8"}
	{"level":"info","ts":"2025-11-08T09:33:06.239057Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-08T09:33:06.239071Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239717Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239754Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:33:06.239761Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.96:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239840Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:33:06.239852Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:33:06.239857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:33:06.243381Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"error","ts":"2025-11-08T09:33:06.243498Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.96:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:33:06.243532Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-11-08T09:33:06.243542Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-022459","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	
	
	==> etcd [835741b73bd96992e220494aef0411a362896ef9fb3d14feec5744015a202fa3] <==
	{"level":"warn","ts":"2025-11-08T09:33:12.274389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.292291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.304644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.317831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.332603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.341719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.356698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.368571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.377907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.387099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.400151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.412233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.433255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.448190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.462323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.477336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.488506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.500530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:33:12.609517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33512","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:33:16.035301Z","caller":"traceutil/trace.go:172","msg":"trace[404446499] linearizableReadLoop","detail":"{readStateIndex:557; appliedIndex:557; }","duration":"315.634815ms","start":"2025-11-08T09:33:15.719641Z","end":"2025-11-08T09:33:16.035276Z","steps":["trace[404446499] 'read index received'  (duration: 315.628328ms)","trace[404446499] 'applied index is now lower than readState.Index'  (duration: 5.589µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:33:16.126737Z","caller":"traceutil/trace.go:172","msg":"trace[1372632036] transaction","detail":"{read_only:false; number_of_response:0; response_revision:510; }","duration":"409.925155ms","start":"2025-11-08T09:33:15.716796Z","end":"2025-11-08T09:33:16.126721Z","steps":["trace[1372632036] 'process raft request'  (duration: 318.50919ms)","trace[1372632036] 'compare'  (duration: 91.337976ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:33:16.126843Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"407.126573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-08T09:33:16.126931Z","caller":"traceutil/trace.go:172","msg":"trace[1231036259] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:510; }","duration":"407.281777ms","start":"2025-11-08T09:33:15.719638Z","end":"2025-11-08T09:33:16.126919Z","steps":["trace[1231036259] 'agreement among raft nodes before linearized reading'  (duration: 315.751717ms)","trace[1231036259] 'range keys from in-memory index tree'  (duration: 91.296726ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:33:16.126964Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:33:15.719625Z","time spent":"407.326145ms","remote":"127.0.0.1:60978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-08T09:33:16.127204Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:33:15.716775Z","time spent":"410.006099ms","remote":"127.0.0.1:33008","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":29,"request content":"compare:<target:MOD key:\"/registry/rolebindings/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/rolebindings/kube-system/kube-proxy\" value_size:382 >> failure:<>"}
	
	
	==> kernel <==
	 09:33:32 up 2 min,  0 users,  load average: 1.04, 0.62, 0.25
	Linux pause-022459 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [885714e5108a3c2966b2834c95aa802e596aae683075e63e46febc1c5314fd70] <==
	{"level":"warn","ts":"2025-11-08T09:33:00.515191Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.541713Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.565112Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.590104Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.617512Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.644570Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.671158Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.696836Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.720185Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.744665Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.768388Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-08T09:33:00.791980Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00077f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E1108 09:33:00.792100       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	E1108 09:33:00.937199       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:00.939044       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:01.936943       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:01.938758       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:02.937281       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:02.939163       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:03.937161       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:03.938723       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:04.936918       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:04.939808       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1108 09:33:05.936663       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1108 09:33:05.939409       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [ea2d4c0331e40aae60227fc6f16266d90144369113fa0b0766d0c8cddfdc495a] <==
	I1108 09:33:13.538066       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:33:13.538074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:33:13.582004       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:33:13.602938       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:33:13.603046       1 policy_source.go:240] refreshing policies
	I1108 09:33:13.608161       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:33:13.613186       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:33:13.615083       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:33:13.616725       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:33:13.619659       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:33:13.619739       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:33:13.621842       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:33:13.623734       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:33:13.629842       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:33:13.638141       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:33:14.210900       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:33:14.436071       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1108 09:33:15.159862       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.96]
	I1108 09:33:15.163886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:33:15.174728       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:33:15.552559       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:33:15.637753       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:33:15.701238       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:33:15.715302       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:33:21.862774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4fcc16ab58f2c41aa14998ac75f5173952e393989167f118b7b6ddc595c5632c] <==
	
	
	==> kube-controller-manager [a72f15a4070014f3642a0f1f9cc9d331b5599007bf8fd76082eab364eeab670c] <==
	I1108 09:33:16.949250       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:33:16.953505       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:33:16.956208       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:33:16.958748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:33:16.961217       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:33:16.964005       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:33:16.964077       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:33:16.964152       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:33:16.964187       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:33:16.964196       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:33:16.965555       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:33:16.965926       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:33:16.966051       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:33:16.968915       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:33:16.969064       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:33:16.970462       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:33:16.972873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:33:16.972910       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:33:16.972921       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:33:16.978885       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:33:16.979570       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:33:16.982824       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:33:16.988527       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:33:17.000001       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:33:17.000202       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	
	
	==> kube-proxy [176ce5f957e2775b64eace0f1522dc9efb88a774e1331d0c81364f08ac574ced] <==
	I1108 09:33:14.788348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:33:14.889641       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:33:14.889700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.96"]
	E1108 09:33:14.889819       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:33:14.991186       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1108 09:33:14.991276       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 09:33:14.991308       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:33:15.016946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:33:15.017686       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:33:15.017759       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:33:15.030512       1 config.go:200] "Starting service config controller"
	I1108 09:33:15.030990       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:33:15.031267       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:33:15.031303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:33:15.031322       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:33:15.031328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:33:15.031859       1 config.go:309] "Starting node config controller"
	I1108 09:33:15.031950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:33:15.131848       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:33:15.131885       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:33:15.131923       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:33:15.132197       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987] <==
	
	
	==> kube-scheduler [7fc2093d83a09e3f3475f06ff43d4a6d522f69b60ee7488bce50159ca306f059] <==
	I1108 09:33:12.169967       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:33:13.544662       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:33:13.544753       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:33:13.544784       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:33:13.546493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:33:13.581871       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:33:13.582180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:33:13.585380       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:13.585486       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:13.587680       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:33:13.587771       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:33:13.686566       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b7d2ad19411e4e2fae252133687dc40c383f8de4bb349e2f74d56a7639ed548d] <==
	E1108 09:33:02.229247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:33:02.329214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.96:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:33:02.569628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.96:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:33:02.638350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:33:02.736008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.96:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:33:02.877747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.96:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:33:03.103172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.96:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:33:03.111981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.96:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:33:03.335777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.96:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:33:03.371770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:33:03.440097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:33:03.520284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.96:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:33:03.645040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.96:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:33:03.757073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.96:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:33:05.467102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.96:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:33:05.675655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.96:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:33:06.002230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.96:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:33:06.386322       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1108 09:33:06.386713       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1108 09:33:06.387203       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:06.387253       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:33:06.387540       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 09:33:06.387988       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 09:33:06.388523       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 09:33:06.388850       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 08 09:33:12 pause-022459 kubelet[3818]: E1108 09:33:12.443903    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:12 pause-022459 kubelet[3818]: E1108 09:33:12.444155    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.448871    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.449315    3818 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022459\" not found" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.498108    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.648575    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-022459\" already exists" pod="kube-system/kube-controller-manager-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.648633    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.669597    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-022459\" already exists" pod="kube-system/kube-scheduler-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.669810    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.671096    3818 kubelet_node_status.go:124] "Node was previously registered" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.671206    3818 kubelet_node_status.go:78] "Successfully registered node" node="pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.671239    3818 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.673032    3818 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.697921    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-022459\" already exists" pod="kube-system/etcd-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: I1108 09:33:13.697952    3818 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-022459"
	Nov 08 09:33:13 pause-022459 kubelet[3818]: E1108 09:33:13.716023    3818 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022459\" already exists" pod="kube-system/kube-apiserver-pause-022459"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.073749    3818 apiserver.go:52] "Watching apiserver"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.117924    3818 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.202968    3818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c-lib-modules\") pod \"kube-proxy-jwkzf\" (UID: \"eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c\") " pod="kube-system/kube-proxy-jwkzf"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.203056    3818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c-xtables-lock\") pod \"kube-proxy-jwkzf\" (UID: \"eb3379ad-b4dc-4d7c-985a-8f97b5fa7e9c\") " pod="kube-system/kube-proxy-jwkzf"
	Nov 08 09:33:14 pause-022459 kubelet[3818]: I1108 09:33:14.387085    3818 scope.go:117] "RemoveContainer" containerID="356af4bb0b055ce67b14e2e5470b8d8eb0ed6533a385c5d374b7266c8295a987"
	Nov 08 09:33:19 pause-022459 kubelet[3818]: E1108 09:33:19.300744    3818 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762594399299774726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 08 09:33:19 pause-022459 kubelet[3818]: E1108 09:33:19.300800    3818 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762594399299774726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 08 09:33:29 pause-022459 kubelet[3818]: E1108 09:33:29.305329    3818 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762594409303676529  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 08 09:33:29 pause-022459 kubelet[3818]: E1108 09:33:29.305485    3818 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762594409303676529  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022459 -n pause-022459
helpers_test.go:269: (dbg) Run:  kubectl --context pause-022459 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.98s)


Test pass (301/344)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 24.31
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 14.04
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.8
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
22 TestOffline 121.64
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 140.28
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.56
35 TestAddons/parallel/Registry 18.14
36 TestAddons/parallel/RegistryCreds 0.9
38 TestAddons/parallel/InspektorGadget 11.98
39 TestAddons/parallel/MetricsServer 6.43
41 TestAddons/parallel/CSI 63.34
42 TestAddons/parallel/Headlamp 20.98
43 TestAddons/parallel/CloudSpanner 6.67
44 TestAddons/parallel/LocalPath 58.05
45 TestAddons/parallel/NvidiaDevicePlugin 6.96
46 TestAddons/parallel/Yakd 10.79
48 TestAddons/StoppedEnableDisable 87.55
49 TestCertOptions 51.09
50 TestCertExpiration 273.82
52 TestForceSystemdFlag 60.88
53 TestForceSystemdEnv 61.46
58 TestErrorSpam/setup 39.67
59 TestErrorSpam/start 0.31
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.56
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 84.2
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.69
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32.7
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.29
75 TestFunctional/serial/CacheCmd/cache/add_local 2.27
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 35.56
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.79
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 22.7
91 TestFunctional/parallel/DryRun 0.2
92 TestFunctional/parallel/InternationalLanguage 0.11
93 TestFunctional/parallel/StatusCmd 0.7
97 TestFunctional/parallel/ServiceCmdConnect 20.46
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 43.4
101 TestFunctional/parallel/SSHCmd 0.3
102 TestFunctional/parallel/CpCmd 1.05
103 TestFunctional/parallel/MySQL 22.04
104 TestFunctional/parallel/FileSync 0.16
105 TestFunctional/parallel/CertSync 1.05
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.31
113 TestFunctional/parallel/License 0.46
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.42
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
121 TestFunctional/parallel/ImageCommands/Setup 1.94
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
135 TestFunctional/parallel/MountCmd/any-port 19.99
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.31
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 6.8
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.05
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
142 TestFunctional/parallel/MountCmd/specific-port 1.31
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.06
144 TestFunctional/parallel/ServiceCmd/DeployApp 19.16
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
146 TestFunctional/parallel/ProfileCmd/profile_list 0.32
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
148 TestFunctional/parallel/ServiceCmd/List 1.21
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.19
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
151 TestFunctional/parallel/ServiceCmd/Format 0.22
152 TestFunctional/parallel/ServiceCmd/URL 0.24
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 247.56
161 TestMultiControlPlane/serial/DeployApp 8.09
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 44.84
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
166 TestMultiControlPlane/serial/CopyFile 10.62
167 TestMultiControlPlane/serial/StopSecondaryNode 88
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
169 TestMultiControlPlane/serial/RestartSecondaryNode 43.51
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.72
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 367.12
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.24
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 253.61
175 TestMultiControlPlane/serial/RestartCluster 104.03
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
177 TestMultiControlPlane/serial/AddSecondaryNode 91.89
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
183 TestJSONOutput/start/Command 83.46
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.26
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 88.94
215 TestMountStart/serial/StartWithMountFirst 22.95
216 TestMountStart/serial/VerifyMountFirst 0.29
217 TestMountStart/serial/StartWithMountSecond 24.66
218 TestMountStart/serial/VerifyMountSecond 0.29
219 TestMountStart/serial/DeleteFirst 0.67
220 TestMountStart/serial/VerifyMountPostDelete 0.29
221 TestMountStart/serial/Stop 1.26
222 TestMountStart/serial/RestartStopped 21.31
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 99.95
227 TestMultiNode/serial/DeployApp2Nodes 6.4
228 TestMultiNode/serial/PingHostFrom2Pods 0.83
229 TestMultiNode/serial/AddNode 48.4
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.44
232 TestMultiNode/serial/CopyFile 5.83
233 TestMultiNode/serial/StopNode 2.3
234 TestMultiNode/serial/StartAfterStop 40.55
235 TestMultiNode/serial/RestartKeepsNodes 308.84
236 TestMultiNode/serial/DeleteNode 2.65
237 TestMultiNode/serial/StopMultiNode 165.91
238 TestMultiNode/serial/RestartMultiNode 89.47
239 TestMultiNode/serial/ValidateNameConflict 39.97
246 TestScheduledStopUnix 110.5
250 TestRunningBinaryUpgrade 139.61
252 TestKubernetesUpgrade 174.06
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 101.72
257 TestNoKubernetes/serial/StartWithStopK8s 6.55
258 TestNoKubernetes/serial/Start 25.63
266 TestNetworkPlugins/group/false 3.22
270 TestISOImage/Setup 40.76
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
272 TestNoKubernetes/serial/ProfileList 0.79
273 TestNoKubernetes/serial/Stop 1.33
274 TestNoKubernetes/serial/StartNoArgs 55.55
276 TestISOImage/Binaries/crictl 0.19
277 TestISOImage/Binaries/curl 0.2
278 TestISOImage/Binaries/docker 0.18
279 TestISOImage/Binaries/git 0.19
280 TestISOImage/Binaries/iptables 0.19
281 TestISOImage/Binaries/podman 0.19
282 TestISOImage/Binaries/rsync 0.18
283 TestISOImage/Binaries/socat 0.2
284 TestISOImage/Binaries/wget 0.18
285 TestISOImage/Binaries/VBoxControl 0.18
286 TestISOImage/Binaries/VBoxService 0.18
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
288 TestStoppedBinaryUpgrade/Setup 3.04
289 TestStoppedBinaryUpgrade/Upgrade 122.02
298 TestPause/serial/Start 105.49
299 TestNetworkPlugins/group/auto/Start 70.48
300 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
301 TestNetworkPlugins/group/kindnet/Start 88.36
302 TestNetworkPlugins/group/calico/Start 93.56
304 TestNetworkPlugins/group/auto/KubeletFlags 0.17
305 TestNetworkPlugins/group/auto/NetCatPod 11.28
306 TestNetworkPlugins/group/auto/DNS 0.19
307 TestNetworkPlugins/group/auto/Localhost 0.16
308 TestNetworkPlugins/group/auto/HairPin 0.16
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/custom-flannel/Start 74.45
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
312 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
313 TestNetworkPlugins/group/kindnet/DNS 0.17
314 TestNetworkPlugins/group/kindnet/Localhost 0.14
315 TestNetworkPlugins/group/kindnet/HairPin 0.16
316 TestNetworkPlugins/group/enable-default-cni/Start 92.88
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/bridge/Start 80.92
319 TestNetworkPlugins/group/calico/KubeletFlags 0.19
320 TestNetworkPlugins/group/calico/NetCatPod 12.27
321 TestNetworkPlugins/group/calico/DNS 0.16
322 TestNetworkPlugins/group/calico/Localhost 0.15
323 TestNetworkPlugins/group/calico/HairPin 0.12
324 TestNetworkPlugins/group/flannel/Start 81.16
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.17
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.35
327 TestNetworkPlugins/group/custom-flannel/DNS 0.17
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
331 TestStartStop/group/old-k8s-version/serial/FirstStart 96.02
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
333 TestNetworkPlugins/group/bridge/NetCatPod 11.3
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
336 TestNetworkPlugins/group/bridge/DNS 0.2
337 TestNetworkPlugins/group/bridge/Localhost 0.2
338 TestNetworkPlugins/group/bridge/HairPin 0.18
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
343 TestStartStop/group/no-preload/serial/FirstStart 104.16
344 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestStartStop/group/embed-certs/serial/FirstStart 97.83
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
348 TestNetworkPlugins/group/flannel/NetCatPod 11.27
349 TestNetworkPlugins/group/flannel/DNS 0.19
350 TestNetworkPlugins/group/flannel/Localhost 0.16
351 TestNetworkPlugins/group/flannel/HairPin 0.17
353 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.86
354 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.71
356 TestStartStop/group/old-k8s-version/serial/Stop 86.17
357 TestStartStop/group/no-preload/serial/DeployApp 12.28
358 TestStartStop/group/embed-certs/serial/DeployApp 10.28
359 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
361 TestStartStop/group/no-preload/serial/Stop 81
362 TestStartStop/group/embed-certs/serial/Stop 86.23
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 86.11
366 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
367 TestStartStop/group/old-k8s-version/serial/SecondStart 46.74
368 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
369 TestStartStop/group/no-preload/serial/SecondStart 62.07
370 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11.01
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/embed-certs/serial/SecondStart 60.89
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
375 TestStartStop/group/old-k8s-version/serial/Pause 3.03
377 TestStartStop/group/newest-cni/serial/FirstStart 67.26
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 83.06
380 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.05
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
384 TestStartStop/group/no-preload/serial/Pause 3.64
386 TestISOImage/PersistentMounts//data 0.19
387 TestISOImage/PersistentMounts//var/lib/docker 0.18
388 TestISOImage/PersistentMounts//var/lib/cni 0.2
389 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
390 TestISOImage/PersistentMounts//var/lib/minikube 0.19
391 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
392 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
393 TestISOImage/VersionJSON 0.18
394 TestISOImage/eBPFSupport 0.19
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
397 TestStartStop/group/embed-certs/serial/Pause 3.56
398 TestStartStop/group/newest-cni/serial/DeployApp 0
399 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
400 TestStartStop/group/newest-cni/serial/Stop 10.86
401 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
402 TestStartStop/group/newest-cni/serial/SecondStart 35.22
403 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
404 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
405 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
406 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
407 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
410 TestStartStop/group/newest-cni/serial/Pause 2.56
TestDownloadOnly/v1.28.0/json-events (24.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-544510 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-544510 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.309583296s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (24.31s)
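
Note: with -o=json, minikube emits machine-readable cloudevents on stdout rather than plain text. A minimal sketch for pulling out the step names locally (jq and the "io.k8s.sigs.minikube.step" event type are assumptions, not part of this test):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2 \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'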

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1108 08:29:22.922606    9745 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1108 08:29:22.922704    9745 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
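
The preload-exists check only verifies that the tarball cached by the previous subtest is on disk. To confirm by hand (a sketch using the path from the log above and the MD5 the GCS API returned in the start log captured in the LogsDuration section below):

	ls -lh /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	md5sum /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# expected: 72bc7f8573f574c02d8c9a9b3496176b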

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-544510
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-544510: exit status 85 (69.493133ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-544510 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-544510 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:28:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:28:58.662376    9756 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:28:58.662627    9756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:58.662637    9756 out.go:374] Setting ErrFile to fd 2...
	I1108 08:28:58.662641    9756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:58.662807    9756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	W1108 08:28:58.662934    9756 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21866-5845/.minikube/config/config.json: open /home/jenkins/minikube-integration/21866-5845/.minikube/config/config.json: no such file or directory
	I1108 08:28:58.663369    9756 out.go:368] Setting JSON to true
	I1108 08:28:58.664291    9756 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":680,"bootTime":1762589859,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:28:58.664373    9756 start.go:143] virtualization: kvm guest
	I1108 08:28:58.666427    9756 out.go:99] [download-only-544510] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:28:58.666578    9756 notify.go:221] Checking for updates...
	W1108 08:28:58.666585    9756 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 08:28:58.667700    9756 out.go:171] MINIKUBE_LOCATION=21866
	I1108 08:28:58.668976    9756 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:28:58.670050    9756 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 08:28:58.671181    9756 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:28:58.672488    9756 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 08:28:58.674624    9756 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 08:28:58.674900    9756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:28:59.126176    9756 out.go:99] Using the kvm2 driver based on user configuration
	I1108 08:28:59.126208    9756 start.go:309] selected driver: kvm2
	I1108 08:28:59.126232    9756 start.go:930] validating driver "kvm2" against <nil>
	I1108 08:28:59.126598    9756 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:28:59.127065    9756 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1108 08:28:59.127221    9756 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 08:28:59.127248    9756 cni.go:84] Creating CNI manager for ""
	I1108 08:28:59.127297    9756 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 08:28:59.127306    9756 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1108 08:28:59.127344    9756 start.go:353] cluster config:
	{Name:download-only-544510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-544510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:28:59.127512    9756 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 08:28:59.128839    9756 out.go:99] Downloading VM boot image ...
	I1108 08:28:59.128865    9756 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21866-5845/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1108 08:29:10.449310    9756 out.go:99] Starting "download-only-544510" primary control-plane node in "download-only-544510" cluster
	I1108 08:29:10.449330    9756 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 08:29:10.567307    9756 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1108 08:29:10.567341    9756 cache.go:59] Caching tarball of preloaded images
	I1108 08:29:10.567484    9756 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 08:29:10.569309    9756 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1108 08:29:10.569328    9756 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1108 08:29:10.679817    9756 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1108 08:29:10.679938    9756 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-544510 host does not exist
	  To start a cluster, run: "minikube start -p download-only-544510"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
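
Exit status 85 is the expected outcome here: the stdout above reports "The control-plane node download-only-544510 host does not exist", so "minikube logs" has nothing to collect, and the test treats the non-zero exit as a pass. A sketch for reproducing the check by hand:

	out/minikube-linux-amd64 logs -p download-only-544510
	echo "exit: $?"   # 85 while the profile's host has never been started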

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-544510
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (14.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-567976 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-567976 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.042904955s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (14.04s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1108 08:29:37.328873    9745 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 08:29:37.328922    9745 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-567976
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-567976: exit status 85 (803.948476ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-544510 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-544510 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │ 08 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-544510                                                                                                                                                 │ download-only-544510 │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │ 08 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-567976 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-567976 │ jenkins │ v1.37.0 │ 08 Nov 25 08:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:29:23
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:29:23.336394   10018 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:29:23.336669   10018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:29:23.336679   10018 out.go:374] Setting ErrFile to fd 2...
	I1108 08:29:23.336684   10018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:29:23.336872   10018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 08:29:23.337300   10018 out.go:368] Setting JSON to true
	I1108 08:29:23.338066   10018 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":704,"bootTime":1762589859,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:29:23.338153   10018 start.go:143] virtualization: kvm guest
	I1108 08:29:23.339792   10018 out.go:99] [download-only-567976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:29:23.339933   10018 notify.go:221] Checking for updates...
	I1108 08:29:23.340926   10018 out.go:171] MINIKUBE_LOCATION=21866
	I1108 08:29:23.342032   10018 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:29:23.343043   10018 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 08:29:23.344063   10018 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:29:23.344967   10018 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 08:29:23.346735   10018 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 08:29:23.346957   10018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:29:23.379027   10018 out.go:99] Using the kvm2 driver based on user configuration
	I1108 08:29:23.379055   10018 start.go:309] selected driver: kvm2
	I1108 08:29:23.379063   10018 start.go:930] validating driver "kvm2" against <nil>
	I1108 08:29:23.379377   10018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:29:23.380416   10018 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1108 08:29:23.380577   10018 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 08:29:23.380602   10018 cni.go:84] Creating CNI manager for ""
	I1108 08:29:23.380642   10018 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 08:29:23.380650   10018 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1108 08:29:23.380698   10018 start.go:353] cluster config:
	{Name:download-only-567976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-567976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:29:23.380777   10018 iso.go:125] acquiring lock: {Name:mk35471d67475e3bd3529d4c69b70bc7e073ac33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 08:29:23.381838   10018 out.go:99] Starting "download-only-567976" primary control-plane node in "download-only-567976" cluster
	I1108 08:29:23.381851   10018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:23.491573   10018 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 08:29:23.491601   10018 cache.go:59] Caching tarball of preloaded images
	I1108 08:29:23.491750   10018 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:23.493262   10018 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1108 08:29:23.493285   10018 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1108 08:29:23.605195   10018 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1108 08:29:23.605245   10018 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-567976 host does not exist
	  To start a cluster, run: "minikube start -p download-only-567976"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.80s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-567976
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1108 08:29:38.703916    9745 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-801039 --alsologtostderr --binary-mirror http://127.0.0.1:39003 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-801039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-801039
--- PASS: TestBinaryMirror (0.62s)
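
The --binary-mirror flag points minikube's kubectl/kubelet/kubeadm downloads at an alternate base URL; the test stands up an ephemeral listener on 127.0.0.1:39003. A rough local equivalent (the directory layout and the python3 HTTP server are illustrative assumptions, not what the test runs):

	python3 -m http.server 39003 --directory ./mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:39003 --driver=kvm2 --container-runtime=crio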

                                                
                                    
TestOffline (121.64s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-175102 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-175102 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (2m0.731556194s)
helpers_test.go:175: Cleaning up "offline-crio-175102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-175102
--- PASS: TestOffline (121.64s)
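
The offline test exercises a start where everything required is already cached. One way to approximate it by hand is to pre-seed the cache and then start again without re-downloading (a sketch; the profile name is illustrative and the test itself manages connectivity differently):

	out/minikube-linux-amd64 start --download-only -p offline-demo --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p offline-demo --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio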

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-982714
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-982714: exit status 85 (64.54867ms)

                                                
                                                
-- stdout --
	* Profile "addons-982714" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-982714"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-982714
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-982714: exit status 85 (63.570153ms)

                                                
                                                
-- stdout --
	* Profile "addons-982714" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-982714"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (140.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-982714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-982714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.279978135s)
--- PASS: TestAddons/Setup (140.28s)
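
Setup enables every addon under test through repeated --addons flags at start time; the same addons can also be toggled individually on the running profile, which is what the later parallel subtests do:

	out/minikube-linux-amd64 -p addons-982714 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-982714 addons disable metrics-server --alsologtostderr -v=1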

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-982714 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-982714 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-982714 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-982714 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [37c39924-9493-4bce-a21a-b32309dde4c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [37c39924-9493-4bce-a21a-b32309dde4c6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.005714672s
addons_test.go:694: (dbg) Run:  kubectl --context addons-982714 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-982714 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-982714 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.56s)

                                                
                                    
TestAddons/parallel/Registry (18.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.504852ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-drc2f" [aaf266db-6e91-4084-a296-d03377708fa1] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007168744s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-f4pgf" [3938b218-293a-4387-b044-708a50497e10] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.002801275s
addons_test.go:392: (dbg) Run:  kubectl --context addons-982714 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-982714 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-982714 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.355465272s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 ip
2025/11/08 08:32:37 [DEBUG] GET http://192.168.39.224:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.14s)
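
The DEBUG line above confirms the registry answers on the node IP at port 5000. From the host, the same endpoint can be queried through the standard registry v2 API (a sketch; the /v2/_catalog path comes from the Docker registry HTTP API and is not something this test calls):

	curl -sS http://192.168.39.224:5000/v2/_catalog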

                                                
                                    
TestAddons/parallel/RegistryCreds (0.9s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 11.141325ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-982714
addons_test.go:332: (dbg) Run:  kubectl --context addons-982714 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.90s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-q2t6d" [51bf80cb-7e99-4e62-bd1c-31a9cabdf3b8] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.014875117s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable inspektor-gadget --alsologtostderr -v=1: (5.961432917s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.539476ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-dsrgz" [60d165fd-f7eb-4b84-9108-14949a5300e7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009010714s
addons_test.go:463: (dbg) Run:  kubectl --context addons-982714 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable metrics-server --alsologtostderr -v=1: (1.347556259s)
--- PASS: TestAddons/parallel/MetricsServer (6.43s)

                                                
                                    
TestAddons/parallel/CSI (63.34s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1108 08:32:33.707477    9745 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1108 08:32:33.713517    9745 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1108 08:32:33.713537    9745 kapi.go:107] duration metric: took 6.076947ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.08567ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-982714 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-982714 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [f5bbbe09-fe01-4cad-926e-b675d3ec3a5b] Pending
helpers_test.go:352: "task-pv-pod" [f5bbbe09-fe01-4cad-926e-b675d3ec3a5b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [f5bbbe09-fe01-4cad-926e-b675d3ec3a5b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003650091s
addons_test.go:572: (dbg) Run:  kubectl --context addons-982714 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-982714 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-982714 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-982714 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-982714 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-982714 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-982714 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1a9f487b-53d1-43d1-84dc-b1a8140b52b0] Pending
helpers_test.go:352: "task-pv-pod-restore" [1a9f487b-53d1-43d1-84dc-b1a8140b52b0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1a9f487b-53d1-43d1-84dc-b1a8140b52b0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004974502s
addons_test.go:614: (dbg) Run:  kubectl --context addons-982714 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-982714 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-982714 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80149749s)
--- PASS: TestAddons/parallel/CSI (63.34s)
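
The repeated helpers_test.go:402 lines above are the harness polling the PVC phase once per interval until it reaches Bound. The same wait can be expressed in a single command with kubectl's jsonpath wait (a sketch; requires kubectl v1.23 or newer):

	kubectl --context addons-982714 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m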

                                                
                                    
TestAddons/parallel/Headlamp (20.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-982714 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-8674984b5f-f55gl" [b0c3c70d-e4c8-4d76-98fb-ea18e8979254] Pending
helpers_test.go:352: "headlamp-8674984b5f-f55gl" [b0c3c70d-e4c8-4d76-98fb-ea18e8979254] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-8674984b5f-f55gl" [b0c3c70d-e4c8-4d76-98fb-ea18e8979254] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.006176811s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable headlamp --alsologtostderr -v=1: (6.065630029s)
--- PASS: TestAddons/parallel/Headlamp (20.98s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-nwfx9" [0065d866-ea72-46b1-be24-15c2cf58ce70] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.035695779s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

                                                
                                    
TestAddons/parallel/LocalPath (58.05s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-982714 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-982714 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6865c5b0-07e5-452f-a3b6-5a6a13078b38] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6865c5b0-07e5-452f-a3b6-5a6a13078b38] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6865c5b0-07e5-452f-a3b6-5a6a13078b38] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004071356s
addons_test.go:967: (dbg) Run:  kubectl --context addons-982714 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 ssh "cat /opt/local-path-provisioner/pvc-6b247cde-862c-46c9-bdd2-91fe4ed25c39_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-982714 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-982714 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.220242681s)
--- PASS: TestAddons/parallel/LocalPath (58.05s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.96s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9nlkp" [34beaf17-15b2-4f57-ad8f-fed0eb6775be] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0438619s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.96s)

                                                
                                    
TestAddons/parallel/Yakd (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2jw68" [0f43625b-ab7a-4359-adfd-b18436117e48] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00364496s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982714 addons disable yakd --alsologtostderr -v=1: (5.79007988s)
--- PASS: TestAddons/parallel/Yakd (10.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (87.55s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-982714
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-982714: (1m27.366378925s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-982714
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-982714
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-982714
--- PASS: TestAddons/StoppedEnableDisable (87.55s)

                                                
                                    
TestCertOptions (51.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-476448 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-476448 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (49.156486911s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-476448 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-476448 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-476448 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-476448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-476448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-476448: (1.506481267s)
--- PASS: TestCertOptions (51.09s)
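The openssl step above dumps the apiserver certificate so the names and IPs requested via --apiserver-names/--apiserver-ips can be checked by eye. The same SAN check can be scripted; a minimal Go sketch, not part of the suite, assuming the certificate was first copied out of the VM to a local apiserver.crt (e.g. out/minikube-linux-amd64 -p cert-options-476448 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt):

// sancheck.go - verify the requested SANs appear in the apiserver cert.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Names passed on the start line above; localhost and www.google.com
	// should be listed alongside the default apiserver names.
	fmt.Println("DNS SANs:", cert.DNSNames)
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(want)) {
				found = true
				break
			}
		}
		fmt.Printf("IP SAN %s present: %v\n", want, found)
	}
}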

                                                
                                    
TestCertExpiration (273.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-349612 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-349612 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (40.636144915s)
E1108 09:27:00.325276    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-349612 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-349612 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (52.259478332s)
helpers_test.go:175: Cleaning up "cert-expiration-349612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-349612
--- PASS: TestCertExpiration (273.82s)
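Both --cert-expiration values exercised above read as Go duration strings (an assumption based on the accepted forms: 3m is three minutes, 8760h is a 365-day year). A standalone sketch of how they decode:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The two values passed to --cert-expiration in the runs above.
	for _, s := range []string{"3m", "8760h"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("--cert-expiration=%s -> %v (%.1f days)\n", s, d, d.Hours()/24)
	}
}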

                                                
                                    
TestForceSystemdFlag (60.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-074000 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-074000 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.387605027s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-074000 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-074000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-074000
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-074000: (1.300366102s)
--- PASS: TestForceSystemdFlag (60.88s)
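The cat of /etc/crio/crio.conf.d/02-crio.conf above reads back CRI-O's drop-in config; the point of --force-systemd is that the runtime ends up on the systemd cgroup manager. A sketch of such a check (the expected TOML line is an assumption about the drop-in's contents, not quoted from this run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the ssh/cat output captured by the test.
	out := `[crio.runtime]
cgroup_manager = "systemd"
`
	if strings.Contains(out, `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	}
}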

                                                
                                    
TestForceSystemdEnv (61.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-284568 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-284568 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.443294612s)
helpers_test.go:175: Cleaning up "force-systemd-env-284568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-284568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-284568: (1.01615868s)
--- PASS: TestForceSystemdEnv (61.46s)

                                                
                                    
TestErrorSpam/setup (39.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-841337 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-841337 --driver=kvm2  --container-runtime=crio
E1108 08:37:00.333869    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.340233    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.351556    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.372894    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.414297    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.495755    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.657264    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:00.978969    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:01.621040    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:02.902669    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:05.464639    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:10.586011    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:37:20.828332    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-841337 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-841337 --driver=kvm2  --container-runtime=crio: (39.667262591s)
--- PASS: TestErrorSpam/setup (39.67s)

                                                
                                    
TestErrorSpam/start (0.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

                                                
                                    
TestErrorSpam/status (0.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
TestErrorSpam/pause (1.56s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (84.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 stop
E1108 08:37:41.309731    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:38:22.272842    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 stop: (1m20.970297783s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 stop: (1.848410431s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-841337 --log_dir /tmp/nospam-841337 stop: (1.380141918s)
--- PASS: TestErrorSpam/stop (84.20s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21866-5845/.minikube/files/etc/test/nested/copy/9745/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427090 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1108 08:39:44.197439    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-427090 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.694363288s)
--- PASS: TestFunctional/serial/StartWithProxy (81.69s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.7s)

=== RUN   TestFunctional/serial/SoftStart
I1108 08:40:14.424877    9745 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427090 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-427090 --alsologtostderr -v=8: (32.694740676s)
functional_test.go:678: soft start took 32.695425846s for "functional-427090" cluster.
I1108 08:40:47.119924    9745 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (32.70s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-427090 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 cache add registry.k8s.io/pause:3.1: (1.042248279s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 cache add registry.k8s.io/pause:3.3: (1.171395593s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 cache add registry.k8s.io/pause:latest: (1.080915325s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-427090 /tmp/TestFunctionalserialCacheCmdcacheadd_local983435063/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cache add minikube-local-cache-test:functional-427090
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 cache add minikube-local-cache-test:functional-427090: (1.941928618s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cache delete minikube-local-cache-test:functional-427090
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-427090
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.331953ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 kubectl -- --context functional-427090 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-427090 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427090 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-427090 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.559238773s)
functional_test.go:776: restart took 35.559377178s for "functional-427090" cluster.
I1108 08:41:30.538162    9745 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (35.56s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-427090 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
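The phase/status lines above come from parsing the control-plane pod list as JSON. A minimal standalone sketch of the same check, assuming kubectl on PATH and the functional-427090 context (the struct mirrors only the fields read here):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-427090",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}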

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 logs: (1.4615901s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 logs --file /tmp/TestFunctionalserialLogsFileCmd482746540/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 logs --file /tmp/TestFunctionalserialLogsFileCmd482746540/001/logs.txt: (1.484994983s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestFunctional/serial/InvalidService (4.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-427090 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-427090
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-427090: exit status 115 (223.01043ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.175:30097 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-427090 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-427090 delete -f testdata/invalidsvc.yaml: (1.336992683s)
--- PASS: TestFunctional/serial/InvalidService (4.79s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 config get cpus: exit status 14 (58.472198ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 config get cpus: exit status 14 (62.874124ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (22.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-427090 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-427090 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 16013: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.2s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427090 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-427090 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (100.263451ms)
-- stdout --
	* [functional-427090] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1108 08:41:54.123685   15919 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:41:54.123784   15919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:41:54.123794   15919 out.go:374] Setting ErrFile to fd 2...
	I1108 08:41:54.123801   15919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:41:54.123993   15919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 08:41:54.124404   15919 out.go:368] Setting JSON to false
	I1108 08:41:54.125220   15919 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1455,"bootTime":1762589859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:41:54.125311   15919 start.go:143] virtualization: kvm guest
	I1108 08:41:54.127176   15919 out.go:179] * [functional-427090] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:41:54.128319   15919 notify.go:221] Checking for updates...
	I1108 08:41:54.128349   15919 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:41:54.129522   15919 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:41:54.130630   15919 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 08:41:54.131588   15919 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:41:54.132544   15919 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:41:54.133536   15919 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:41:54.134842   15919 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:41:54.135420   15919 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:41:54.165010   15919 out.go:179] * Using the kvm2 driver based on existing profile
	I1108 08:41:54.166076   15919 start.go:309] selected driver: kvm2
	I1108 08:41:54.166089   15919 start.go:930] validating driver "kvm2" against &{Name:functional-427090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-427090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:41:54.166179   15919 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:41:54.167899   15919 out.go:203] 
	W1108 08:41:54.168958   15919 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 08:41:54.169944   15919 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427090 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.20s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427090 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-427090 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (106.920979ms)
-- stdout --
	* [functional-427090] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1108 08:41:40.765051   15558 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:41:40.765320   15558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:41:40.765329   15558 out.go:374] Setting ErrFile to fd 2...
	I1108 08:41:40.765333   15558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:41:40.765663   15558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 08:41:40.766110   15558 out.go:368] Setting JSON to false
	I1108 08:41:40.766934   15558 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1442,"bootTime":1762589859,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:41:40.767027   15558 start.go:143] virtualization: kvm guest
	I1108 08:41:40.769142   15558 out.go:179] * [functional-427090] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1108 08:41:40.770260   15558 notify.go:221] Checking for updates...
	I1108 08:41:40.770308   15558 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:41:40.771384   15558 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:41:40.772513   15558 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 08:41:40.773663   15558 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 08:41:40.774716   15558 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:41:40.775692   15558 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:41:40.777068   15558 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:41:40.777480   15558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:41:40.809904   15558 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1108 08:41:40.811054   15558 start.go:309] selected driver: kvm2
	I1108 08:41:40.811067   15558 start.go:930] validating driver "kvm2" against &{Name:functional-427090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-427090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:41:40.811151   15558 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:41:40.812884   15558 out.go:203] 
	W1108 08:41:40.813841   15558 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 08:41:40.814767   15558 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.7s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (20.46s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-427090 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-427090 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8mkjr" [c91327ff-efe8-4050-932e-c54ed255eac9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-8mkjr" [c91327ff-efe8-4050-932e-c54ed255eac9] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.00424014s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.175:30335
functional_test.go:1680: http://192.168.39.175:30335: success! body:
Request served by hello-node-connect-7d85dfc575-8mkjr

HTTP/1.1 GET /

Host: 192.168.39.175:30335
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.46s)
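The body above is kicbase/echo-server reflecting the request back, which proves the NodePort route end to end. A minimal client-side sketch of the same probe (the URL is the one this run discovered and is specific to it):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL printed by "minikube service hello-node-connect --url" above.
	resp, err := http.Get("http://192.168.39.175:30335/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body) // echo-server reports the serving pod and request headers
}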

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [54aeeaf1-610b-46df-aca8-d2e37e4fe509] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005155507s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-427090 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-427090 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-427090 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-427090 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-427090 apply -f testdata/storage-provisioner/pod.yaml
I1108 08:41:46.411117    9745 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4f8ba4b4-ed48-4231-b540-d7ea9db576bb] Pending
helpers_test.go:352: "sp-pod" [4f8ba4b4-ed48-4231-b540-d7ea9db576bb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4f8ba4b4-ed48-4231-b540-d7ea9db576bb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.006136983s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-427090 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-427090 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-427090 delete -f testdata/storage-provisioner/pod.yaml: (2.426564202s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-427090 apply -f testdata/storage-provisioner/pod.yaml
I1108 08:42:13.122399    9745 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3434530f-87c9-426a-8b4f-712100482d57] Pending
helpers_test.go:352: "sp-pod" [3434530f-87c9-426a-8b4f-712100482d57] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3434530f-87c9-426a-8b4f-712100482d57] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00473353s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-427090 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.40s)
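The sequence above is a persistence round-trip: write through the first sp-pod, delete it, recreate it from the same manifest, and confirm the file is still on the PVC-backed mount. A condensed sketch of that flow, assuming kubectl on PATH and the same testdata manifests (not the harness's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-427090 context and
// panics on failure, mirroring the pass/fail behavior of the test steps.
func run(args ...string) {
	args = append([]string{"--context", "functional-427090"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here for the replacement pod to be Running)
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo should survive the swap
}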

                                                
                                    
TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh -n functional-427090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cp functional-427090:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2718754243/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh -n functional-427090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh -n functional-427090 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.05s)
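For reference, `minikube cp` works in both directions (host to node, and node to host by prefixing the source with the node name), and each copy above is verified with an `ssh -n ... sudo cat` read-back. A hedged sketch of the same round trip; the mk helper is hypothetical and error handling is trimmed:

// `minikube cp` round trip sketch, mirroring the invocations in the log.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) ([]byte, error) {
	full := append([]string{"-p", "functional-427090"}, args...)
	return exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
}

func main() {
	// Host -> node, then read it back over ssh to verify.
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	out, _ := mk("ssh", "-n", "functional-427090", "sudo cat /home/docker/cp-test.txt")
	fmt.Printf("in-node copy: %s", out)
	// Node -> host: prefix the source with the node name.
	mk("cp", "functional-427090:/home/docker/cp-test.txt", "/tmp/cp-test.txt")
}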

                                                
                                    
TestFunctional/parallel/MySQL (22.04s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-427090 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-kzgm6" [86424919-f28f-48b6-a0f0-73ad93258738] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-kzgm6" [86424919-f28f-48b6-a0f0-73ad93258738] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003980019s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427090 exec mysql-5bb876957f-kzgm6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427090 exec mysql-5bb876957f-kzgm6 -- mysql -ppassword -e "show databases;": exit status 1 (362.03952ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1108 08:41:59.141870    9745 retry.go:31] will retry after 1.276641861s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427090 exec mysql-5bb876957f-kzgm6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.04s)
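The failed first attempt above is the usual race: the pod reports Running before mysqld has bound its unix socket, so the query fails with ERROR 2002 and the harness retries after a backoff. A sketch of that retry pattern, with the command copied from the log and the backoff schedule illustrative:

// Retry-until-ready for the MySQL query; pod name and command are from the
// log, the backoff schedule is illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-427090", "exec",
		"mysql-5bb876957f-kzgm6", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	backoff := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("databases:\n%s", out)
			return
		}
		// ERROR 2002 means mysqld's socket is not up yet; wait and try again.
		fmt.Printf("attempt %d: %v, retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
}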

                                                
                                    
TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9745/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /etc/test/nested/copy/9745/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (1.05s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9745.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /etc/ssl/certs/9745.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9745.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /usr/share/ca-certificates/9745.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/97452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /etc/ssl/certs/97452.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/97452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /usr/share/ca-certificates/97452.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.05s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-427090 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh "sudo systemctl is-active docker": exit status 1 (152.883324ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh "sudo systemctl is-active containerd": exit status 1 (160.046197ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)
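Note the inverted success condition here: `systemctl is-active` exits non-zero (status 3) for an inactive unit, ssh propagates that status, and minikube surfaces it as exit status 1, so a failing command with "inactive" on stdout is exactly what the test wants when crio is the configured runtime. A sketch of that check; the runtimeInactive helper is hypothetical:

// Checking that a non-active runtime is disabled: a non-zero exit with
// "inactive" on stdout is the passing case for this test.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeInactive(unit string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-427090",
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	return err != nil && strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive(unit))
	}
}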

                                                
                                    
TestFunctional/parallel/License (0.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427090 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-427090
localhost/kicbase/echo-server:functional-427090
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427090 image ls --format short --alsologtostderr:
I1108 08:42:19.005842   16538 out.go:360] Setting OutFile to fd 1 ...
I1108 08:42:19.006080   16538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:19.006088   16538 out.go:374] Setting ErrFile to fd 2...
I1108 08:42:19.006092   16538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:19.006282   16538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
I1108 08:42:19.006766   16538 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:19.006860   16538 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:19.008791   16538 ssh_runner.go:195] Run: systemctl --version
I1108 08:42:19.010722   16538 main.go:143] libmachine: domain functional-427090 has defined MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:19.011068   16538 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:41:46", ip: ""} in network mk-functional-427090: {Iface:virbr1 ExpiryTime:2025-11-08 09:39:09 +0000 UTC Type:0 Mac:52:54:00:12:41:46 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-427090 Clientid:01:52:54:00:12:41:46}
I1108 08:42:19.011095   16538 main.go:143] libmachine: domain functional-427090 has defined IP address 192.168.39.175 and MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:19.011218   16538 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/functional-427090/id_rsa Username:docker}
I1108 08:42:19.094801   16538 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
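The stderr above shows that `image ls` ultimately runs `sudo crictl images --output json` inside the VM. A sketch of decoding that output directly (meant to run inside `minikube ssh`); the JSON field names follow the CRI image listing as I understand it, so treat the struct as an assumption:

// Decoding the crictl image listing that `image ls` is built on.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}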

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427090 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ localhost/minikube-local-cache-test     │ functional-427090  │ 8bb7589f9eea7 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-427090  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427090 image ls --format table --alsologtostderr:
I1108 08:42:21.323305   16620 out.go:360] Setting OutFile to fd 1 ...
I1108 08:42:21.323534   16620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:21.323542   16620 out.go:374] Setting ErrFile to fd 2...
I1108 08:42:21.323547   16620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:21.323759   16620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
I1108 08:42:21.324250   16620 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:21.324337   16620 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:21.326747   16620 ssh_runner.go:195] Run: systemctl --version
I1108 08:42:21.329027   16620 main.go:143] libmachine: domain functional-427090 has defined MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:21.329450   16620 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:41:46", ip: ""} in network mk-functional-427090: {Iface:virbr1 ExpiryTime:2025-11-08 09:39:09 +0000 UTC Type:0 Mac:52:54:00:12:41:46 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-427090 Clientid:01:52:54:00:12:41:46}
I1108 08:42:21.329476   16620 main.go:143] libmachine: domain functional-427090 has defined IP address 192.168.39.175 and MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:21.329643   16620 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/functional-427090/id_rsa Username:docker}
I1108 08:42:21.409582   16620 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427090 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"8bb7589f9eea739c25d0b797f62ef0e87c57c06f14d593f09b942cfd0266c2d5","repoDigests":["localhost/minikube-local-cache-test@sha256:84afa48359069c3c65b1b3be5c35188169dea1cc652d54507832bd580df95ecd"],"repoTags":["localhost/minikube-local-cache-test:functional-427090"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-427090"],"size":"4943877"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427090 image ls --format json --alsologtostderr:
I1108 08:42:21.132770   16609 out.go:360] Setting OutFile to fd 1 ...
I1108 08:42:21.133021   16609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:21.133030   16609 out.go:374] Setting ErrFile to fd 2...
I1108 08:42:21.133034   16609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:21.133203   16609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
I1108 08:42:21.134608   16609 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:21.134865   16609 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:21.136854   16609 ssh_runner.go:195] Run: systemctl --version
I1108 08:42:21.138786   16609 main.go:143] libmachine: domain functional-427090 has defined MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:21.139139   16609 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:41:46", ip: ""} in network mk-functional-427090: {Iface:virbr1 ExpiryTime:2025-11-08 09:39:09 +0000 UTC Type:0 Mac:52:54:00:12:41:46 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-427090 Clientid:01:52:54:00:12:41:46}
I1108 08:42:21.139161   16609 main.go:143] libmachine: domain functional-427090 has defined IP address 192.168.39.175 and MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:21.139287   16609 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/functional-427090/id_rsa Username:docker}
I1108 08:42:21.220378   16609 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427090 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-427090
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8bb7589f9eea739c25d0b797f62ef0e87c57c06f14d593f09b942cfd0266c2d5
repoDigests:
- localhost/minikube-local-cache-test@sha256:84afa48359069c3c65b1b3be5c35188169dea1cc652d54507832bd580df95ecd
repoTags:
- localhost/minikube-local-cache-test:functional-427090
size: "3330"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427090 image ls --format yaml --alsologtostderr:
I1108 08:42:19.200714   16549 out.go:360] Setting OutFile to fd 1 ...
I1108 08:42:19.200949   16549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:19.200958   16549 out.go:374] Setting ErrFile to fd 2...
I1108 08:42:19.200961   16549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:19.201132   16549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
I1108 08:42:19.201636   16549 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:19.201723   16549 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:19.203779   16549 ssh_runner.go:195] Run: systemctl --version
I1108 08:42:19.205877   16549 main.go:143] libmachine: domain functional-427090 has defined MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:19.206205   16549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:41:46", ip: ""} in network mk-functional-427090: {Iface:virbr1 ExpiryTime:2025-11-08 09:39:09 +0000 UTC Type:0 Mac:52:54:00:12:41:46 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-427090 Clientid:01:52:54:00:12:41:46}
I1108 08:42:19.206233   16549 main.go:143] libmachine: domain functional-427090 has defined IP address 192.168.39.175 and MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:19.206344   16549 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/functional-427090/id_rsa Username:docker}
I1108 08:42:19.292323   16549 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh pgrep buildkitd: exit status 1 (155.217378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image build -t localhost/my-image:functional-427090 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 image build -t localhost/my-image:functional-427090 testdata/build --alsologtostderr: (3.612804744s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427090 image build -t localhost/my-image:functional-427090 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 333e2fa9dbf
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-427090
--> 1287f5e7f33
Successfully tagged localhost/my-image:functional-427090
1287f5e7f33a5508711c085e09b8ef4529ec41012b4afa4b4c4834780aba9ab5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427090 image build -t localhost/my-image:functional-427090 testdata/build --alsologtostderr:
I1108 08:42:19.549603   16571 out.go:360] Setting OutFile to fd 1 ...
I1108 08:42:19.549738   16571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:19.549747   16571 out.go:374] Setting ErrFile to fd 2...
I1108 08:42:19.549751   16571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:42:19.549960   16571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
I1108 08:42:19.550544   16571 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:19.551110   16571 config.go:182] Loaded profile config "functional-427090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:42:19.552983   16571 ssh_runner.go:195] Run: systemctl --version
I1108 08:42:19.555069   16571 main.go:143] libmachine: domain functional-427090 has defined MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:19.555488   16571 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:41:46", ip: ""} in network mk-functional-427090: {Iface:virbr1 ExpiryTime:2025-11-08 09:39:09 +0000 UTC Type:0 Mac:52:54:00:12:41:46 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-427090 Clientid:01:52:54:00:12:41:46}
I1108 08:42:19.555536   16571 main.go:143] libmachine: domain functional-427090 has defined IP address 192.168.39.175 and MAC address 52:54:00:12:41:46 in network mk-functional-427090
I1108 08:42:19.555684   16571 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/functional-427090/id_rsa Username:docker}
I1108 08:42:19.637525   16571 build_images.go:162] Building image from path: /tmp/build.2582021027.tar
I1108 08:42:19.637610   16571 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 08:42:19.650007   16571 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2582021027.tar
I1108 08:42:19.655371   16571 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2582021027.tar: stat -c "%s %y" /var/lib/minikube/build/build.2582021027.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2582021027.tar': No such file or directory
I1108 08:42:19.655415   16571 ssh_runner.go:362] scp /tmp/build.2582021027.tar --> /var/lib/minikube/build/build.2582021027.tar (3072 bytes)
I1108 08:42:19.688457   16571 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2582021027
I1108 08:42:19.701518   16571 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2582021027 -xf /var/lib/minikube/build/build.2582021027.tar
I1108 08:42:19.713605   16571 crio.go:315] Building image: /var/lib/minikube/build/build.2582021027
I1108 08:42:19.713687   16571 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-427090 /var/lib/minikube/build/build.2582021027 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1108 08:42:23.076263   16571 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-427090 /var/lib/minikube/build/build.2582021027 --cgroup-manager=cgroupfs: (3.362539884s)
I1108 08:42:23.076367   16571 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2582021027
I1108 08:42:23.092819   16571 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2582021027.tar
I1108 08:42:23.105821   16571 build_images.go:218] Built localhost/my-image:functional-427090 from /tmp/build.2582021027.tar
I1108 08:42:23.105872   16571 build_images.go:134] succeeded building to: functional-427090
I1108 08:42:23.105879   16571 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
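The stderr trace spells out the crio build path: tar the local build context, copy it into the VM, unpack it under /var/lib/minikube/build, and run `sudo podman build --cgroup-manager=cgroupfs`. A condensed sketch of the in-VM steps with an illustrative directory name (the real harness uses a random build.<N> suffix and copies the tar in over ssh first):

// Condensed version of the in-VM build steps from the stderr trace above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	dir := "/var/lib/minikube/build/build.example"
	run("sudo", "mkdir", "-p", dir)                  // staging dir for the context
	run("sudo", "tar", "-C", dir, "-xf", dir+".tar") // unpack the uploaded context
	run("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-427090",
		dir, "--cgroup-manager=cgroupfs") // crio delegates the build to podman
	run("sudo", "rm", "-rf", dir) // clean up, as the harness does
}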

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.920330728s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-427090
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image load --daemon kicbase/echo-server:functional-427090 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 image load --daemon kicbase/echo-server:functional-427090 --alsologtostderr: (1.19225025s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdany-port3815672927/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762591300820290815" to /tmp/TestFunctionalparallelMountCmdany-port3815672927/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762591300820290815" to /tmp/TestFunctionalparallelMountCmdany-port3815672927/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762591300820290815" to /tmp/TestFunctionalparallelMountCmdany-port3815672927/001/test-1762591300820290815
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (177.240427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 08:41:40.997873    9745 retry.go:31] will retry after 392.767429ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 08:41 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 08:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 08:41 test-1762591300820290815
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh cat /mount-9p/test-1762591300820290815
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-427090 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8d48a18e-6283-4671-85d1-d1efc7f4a9e8] Pending
helpers_test.go:352: "busybox-mount" [8d48a18e-6283-4671-85d1-d1efc7f4a9e8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8d48a18e-6283-4671-85d1-d1efc7f4a9e8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8d48a18e-6283-4671-85d1-d1efc7f4a9e8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.00604057s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-427090 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh stat /mount-9p/created-by-pod
E1108 08:42:00.325850    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdany-port3815672927/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.99s)
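The non-zero first probe above is expected: the `minikube mount` daemon starts in the background, so the findmnt check races it and is retried. A sketch of polling until the 9p mount shows up; the command strings match the log, the retry bounds are illustrative:

// Poll until the 9p mount appears; the first probe usually loses the race
// with the background mount daemon.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-427090",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(400 * time.Millisecond) // the harness backs off similarly
	}
	fmt.Println("mount never appeared")
}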

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image load --daemon kicbase/echo-server:functional-427090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-427090
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image load --daemon kicbase/echo-server:functional-427090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image save kicbase/echo-server:functional-427090 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
I1108 08:41:45.153934    9745 retry.go:31] will retry after 1.04793803s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:889c2e66-287a-499e-9f1d-f7b302097daa ResourceVersion:736 Generation:0 CreationTimestamp:2025-11-08 08:41:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001905190 VolumeMode:0xc0019051a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 image save kicbase/echo-server:functional-427090 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (6.803968344s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image rm kicbase/echo-server:functional-427090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.05s)
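ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a tarball round trip: export the image from the cluster, delete it there, and restore it from the tar. A sketch reusing the run helper and profile constant from the previous block; the tar path is an arbitrary writable location, not the Jenkins workspace path from the log:

// Assumes the run helper and profile constant defined in the sketch above.
tag := "kicbase/echo-server:" + profile
tar := "/tmp/echo-server-save.tar" // any writable path

run("minikube", "-p", profile, "image", "save", tag, tar) // cluster -> tarball
run("minikube", "-p", profile, "image", "rm", tag)        // drop it from the cluster
run("minikube", "-p", profile, "image", "load", tar)      // tarball -> cluster
run("minikube", "-p", profile, "image", "ls")             // confirm it is back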

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-427090
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 image save --daemon kicbase/echo-server:functional-427090 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-427090
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdspecific-port820465745/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (174.574715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 08:42:00.980586    9745 retry.go:31] will retry after 375.152127ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdspecific-port820465745/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh "sudo umount -f /mount-9p": exit status 1 (181.810717ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-427090 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdspecific-port820465745/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)
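The retry line at 08:42:00 shows why the probe loops: the 9p mount takes a moment to appear inside the guest after the mount process starts, and --port 46464 simply pins the 9p server to a predictable port. A sketch of the same start-poll-kill pattern, assuming the profile constant from the image sketches plus os/exec and time imports:

// The mount command stays in the foreground, so start it as a child process.
mount := exec.Command("minikube", "mount", "-p", profile,
	"/tmp/mount-src:/mount-9p", "--port", "46464")
if err := mount.Start(); err != nil {
	panic(err)
}
defer mount.Process.Kill() // the mount serves until killed

// The mount is not instantly visible in the guest, so poll like the test does.
mounted := false
for i := 0; i < 10 && !mounted; i++ {
	err := exec.Command("minikube", "-p", profile, "ssh",
		"findmnt -T /mount-9p | grep 9p").Run()
	mounted = err == nil
	if !mounted {
		time.Sleep(500 * time.Millisecond)
	}
}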

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.06s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup471079245/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup471079245/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup471079245/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T" /mount1: exit status 1 (194.487063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 08:42:02.310951    9745 retry.go:31] will retry after 277.03393ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-427090 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup471079245/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup471079245/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup471079245/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (19.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-427090 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-427090 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-kcv5h" [5aae4ff8-d830-420b-b6e7-594abe789cd7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-kcv5h" [5aae4ff8-d830-420b-b6e7-594abe789cd7] Running
2025/11/08 08:42:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.006281205s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.16s)
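The deploy step is plain kubectl: create a deployment from kicbase/echo-server, expose it as a NodePort service, then wait for the pod to come up. The harness watches pods directly; outside the harness, kubectl's own readiness gate does the same job. A sketch reusing the run helper, with the kubeconfig context name from this run:

ctx := "functional-427090" // kubeconfig context, same as the profile name here
run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
	"--image", "kicbase/echo-server")
run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
	"--type=NodePort", "--port=8080")
// Block until the deployment reports Available instead of polling pods.
run("kubectl", "--context", ctx, "wait", "--for=condition=available",
	"deployment/hello-node", "--timeout=600s")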

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "258.750122ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "56.271113ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "241.404114ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "295.405423ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)
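-o json (and --output json above) makes the profile list machine-readable. The JSON layout is a minikube implementation detail, so a defensive consumer decodes into a generic map rather than a fixed struct; a minimal standalone sketch:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically: the top-level keys and their shapes can change
	// between minikube versions, so no struct is assumed here.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}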

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 service list: (1.213970523s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-427090 service list -o json: (1.187551994s)
functional_test.go:1504: Took "1.187644234s" to run "out/minikube-linux-amd64 -p functional-427090 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.175:31150
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-427090 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.175:31150
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-427090
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-427090
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-427090
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (247.56s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1108 08:42:28.039237    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m6.996064488s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (247.56s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- rollout status deployment/busybox
E1108 08:46:38.775678    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:38.782040    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:38.793399    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:38.814780    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:38.856154    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:38.937559    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:39.099111    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:39.420789    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:40.062161    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 kubectl -- rollout status deployment/busybox: (5.80919352s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-4xqmp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-85jx7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-t9kqb -- nslookup kubernetes.io
E1108 08:46:41.343647    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-4xqmp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-85jx7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-t9kqb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-4xqmp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-85jx7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-t9kqb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.09s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-4xqmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-4xqmp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-85jx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-85jx7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-t9kqb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 kubectl -- exec busybox-7b57f96db7-t9kqb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
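The pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 takes exactly the fifth line of BusyBox nslookup output and that line's third space-separated field, which is the resolved address of the host-side gateway (192.168.39.1 here); the follow-up ping then proves pod-to-host reachability. A Go helper that mimics the parse, assuming fmt and strings imports; strings.Split is used rather than strings.Fields because cut -d' ' splits on single spaces without collapsing runs:

// hostIPFromNslookup extracts line 5, field 3 of nslookup output, exactly
// like `awk 'NR==5' | cut -d' ' -f3`. It relies on the fixed line layout of
// BusyBox nslookup, just as the test does.
func hostIPFromNslookup(out string) (string, error) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ") // line 5, split on single spaces
	if len(fields) < 3 {
		return "", fmt.Errorf("line 5 has fewer than 3 fields: %q", lines[4])
	}
	return fields[2], nil
}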

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.84s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node add --alsologtostderr -v 5
E1108 08:46:43.905198    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:49.027134    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:59.269164    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:47:00.325580    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:47:19.750971    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 node add --alsologtostderr -v 5: (44.150021379s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.84s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-412754 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp testdata/cp-test.txt ha-412754:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4066556841/001/cp-test_ha-412754.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754:/home/docker/cp-test.txt ha-412754-m02:/home/docker/cp-test_ha-412754_ha-412754-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test_ha-412754_ha-412754-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754:/home/docker/cp-test.txt ha-412754-m03:/home/docker/cp-test_ha-412754_ha-412754-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test_ha-412754_ha-412754-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754:/home/docker/cp-test.txt ha-412754-m04:/home/docker/cp-test_ha-412754_ha-412754-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test_ha-412754_ha-412754-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp testdata/cp-test.txt ha-412754-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4066556841/001/cp-test_ha-412754-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m02:/home/docker/cp-test.txt ha-412754:/home/docker/cp-test_ha-412754-m02_ha-412754.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test_ha-412754-m02_ha-412754.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m02:/home/docker/cp-test.txt ha-412754-m03:/home/docker/cp-test_ha-412754-m02_ha-412754-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test_ha-412754-m02_ha-412754-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m02:/home/docker/cp-test.txt ha-412754-m04:/home/docker/cp-test_ha-412754-m02_ha-412754-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test_ha-412754-m02_ha-412754-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp testdata/cp-test.txt ha-412754-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4066556841/001/cp-test_ha-412754-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m03:/home/docker/cp-test.txt ha-412754:/home/docker/cp-test_ha-412754-m03_ha-412754.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test_ha-412754-m03_ha-412754.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m03:/home/docker/cp-test.txt ha-412754-m02:/home/docker/cp-test_ha-412754-m03_ha-412754-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test_ha-412754-m03_ha-412754-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m03:/home/docker/cp-test.txt ha-412754-m04:/home/docker/cp-test_ha-412754-m03_ha-412754-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test_ha-412754-m03_ha-412754-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp testdata/cp-test.txt ha-412754-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4066556841/001/cp-test_ha-412754-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m04:/home/docker/cp-test.txt ha-412754:/home/docker/cp-test_ha-412754-m04_ha-412754.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754 "sudo cat /home/docker/cp-test_ha-412754-m04_ha-412754.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m04:/home/docker/cp-test.txt ha-412754-m02:/home/docker/cp-test_ha-412754-m04_ha-412754-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m02 "sudo cat /home/docker/cp-test_ha-412754-m04_ha-412754-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 cp ha-412754-m04:/home/docker/cp-test.txt ha-412754-m03:/home/docker/cp-test_ha-412754-m04_ha-412754-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 ssh -n ha-412754-m03 "sudo cat /home/docker/cp-test_ha-412754-m04_ha-412754-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.62s)
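The CopyFile block is an n-by-n sweep: the test file is pushed to each node, copied from that node to every other node, and read back over SSH after every copy. The same shape as a pair of loops, reusing the run helper; the node names are the ones from this run:

nodes := []string{"ha-412754", "ha-412754-m02", "ha-412754-m03", "ha-412754-m04"}
for _, src := range nodes {
	// Seed the source node, then fan the file out to every other node.
	run("minikube", "-p", "ha-412754", "cp", "testdata/cp-test.txt",
		src+":/home/docker/cp-test.txt")
	for _, dst := range nodes {
		if dst == src {
			continue
		}
		target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
		run("minikube", "-p", "ha-412754", "cp",
			src+":/home/docker/cp-test.txt", dst+":"+target)
		// Read the copy back over SSH to prove it landed intact.
		run("minikube", "-p", "ha-412754", "ssh", "-n", dst, "sudo cat "+target)
	}
}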

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (88s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node stop m02 --alsologtostderr -v 5
E1108 08:48:00.713166    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 node stop m02 --alsologtostderr -v 5: (1m27.488512503s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5: exit status 7 (508.543223ms)

                                                
                                                
-- stdout --
	ha-412754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-412754-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-412754-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-412754-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 08:49:07.617768   19864 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:49:07.617860   19864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:49:07.617865   19864 out.go:374] Setting ErrFile to fd 2...
	I1108 08:49:07.617869   19864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:49:07.618077   19864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 08:49:07.618228   19864 out.go:368] Setting JSON to false
	I1108 08:49:07.618256   19864 mustload.go:66] Loading cluster: ha-412754
	I1108 08:49:07.618380   19864 notify.go:221] Checking for updates...
	I1108 08:49:07.619193   19864 config.go:182] Loaded profile config "ha-412754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:49:07.619220   19864 status.go:174] checking status of ha-412754 ...
	I1108 08:49:07.621746   19864 status.go:371] ha-412754 host status = "Running" (err=<nil>)
	I1108 08:49:07.621764   19864 host.go:66] Checking if "ha-412754" exists ...
	I1108 08:49:07.624290   19864 main.go:143] libmachine: domain ha-412754 has defined MAC address 52:54:00:ae:de:6c in network mk-ha-412754
	I1108 08:49:07.624705   19864 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ae:de:6c", ip: ""} in network mk-ha-412754: {Iface:virbr1 ExpiryTime:2025-11-08 09:42:42 +0000 UTC Type:0 Mac:52:54:00:ae:de:6c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-412754 Clientid:01:52:54:00:ae:de:6c}
	I1108 08:49:07.624738   19864 main.go:143] libmachine: domain ha-412754 has defined IP address 192.168.39.16 and MAC address 52:54:00:ae:de:6c in network mk-ha-412754
	I1108 08:49:07.624861   19864 host.go:66] Checking if "ha-412754" exists ...
	I1108 08:49:07.625032   19864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:49:07.627345   19864 main.go:143] libmachine: domain ha-412754 has defined MAC address 52:54:00:ae:de:6c in network mk-ha-412754
	I1108 08:49:07.627742   19864 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ae:de:6c", ip: ""} in network mk-ha-412754: {Iface:virbr1 ExpiryTime:2025-11-08 09:42:42 +0000 UTC Type:0 Mac:52:54:00:ae:de:6c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-412754 Clientid:01:52:54:00:ae:de:6c}
	I1108 08:49:07.627772   19864 main.go:143] libmachine: domain ha-412754 has defined IP address 192.168.39.16 and MAC address 52:54:00:ae:de:6c in network mk-ha-412754
	I1108 08:49:07.627931   19864 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/ha-412754/id_rsa Username:docker}
	I1108 08:49:07.718265   19864 ssh_runner.go:195] Run: systemctl --version
	I1108 08:49:07.725650   19864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:49:07.744176   19864 kubeconfig.go:125] found "ha-412754" server: "https://192.168.39.254:8443"
	I1108 08:49:07.744228   19864 api_server.go:166] Checking apiserver status ...
	I1108 08:49:07.744270   19864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:49:07.769965   19864 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup
	W1108 08:49:07.783190   19864 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 08:49:07.783242   19864 ssh_runner.go:195] Run: ls
	I1108 08:49:07.789872   19864 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1108 08:49:07.794445   19864 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1108 08:49:07.794465   19864 status.go:463] ha-412754 apiserver status = Running (err=<nil>)
	I1108 08:49:07.794511   19864 status.go:176] ha-412754 status: &{Name:ha-412754 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:49:07.794536   19864 status.go:174] checking status of ha-412754-m02 ...
	I1108 08:49:07.796133   19864 status.go:371] ha-412754-m02 host status = "Stopped" (err=<nil>)
	I1108 08:49:07.796159   19864 status.go:384] host is not running, skipping remaining checks
	I1108 08:49:07.796165   19864 status.go:176] ha-412754-m02 status: &{Name:ha-412754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:49:07.796195   19864 status.go:174] checking status of ha-412754-m03 ...
	I1108 08:49:07.797508   19864 status.go:371] ha-412754-m03 host status = "Running" (err=<nil>)
	I1108 08:49:07.797525   19864 host.go:66] Checking if "ha-412754-m03" exists ...
	I1108 08:49:07.800125   19864 main.go:143] libmachine: domain ha-412754-m03 has defined MAC address 52:54:00:6c:8f:12 in network mk-ha-412754
	I1108 08:49:07.800646   19864 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:8f:12", ip: ""} in network mk-ha-412754: {Iface:virbr1 ExpiryTime:2025-11-08 09:44:55 +0000 UTC Type:0 Mac:52:54:00:6c:8f:12 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-412754-m03 Clientid:01:52:54:00:6c:8f:12}
	I1108 08:49:07.800669   19864 main.go:143] libmachine: domain ha-412754-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:6c:8f:12 in network mk-ha-412754
	I1108 08:49:07.800819   19864 host.go:66] Checking if "ha-412754-m03" exists ...
	I1108 08:49:07.800992   19864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:49:07.802951   19864 main.go:143] libmachine: domain ha-412754-m03 has defined MAC address 52:54:00:6c:8f:12 in network mk-ha-412754
	I1108 08:49:07.803304   19864 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:8f:12", ip: ""} in network mk-ha-412754: {Iface:virbr1 ExpiryTime:2025-11-08 09:44:55 +0000 UTC Type:0 Mac:52:54:00:6c:8f:12 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-412754-m03 Clientid:01:52:54:00:6c:8f:12}
	I1108 08:49:07.803324   19864 main.go:143] libmachine: domain ha-412754-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:6c:8f:12 in network mk-ha-412754
	I1108 08:49:07.803489   19864 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/ha-412754-m03/id_rsa Username:docker}
	I1108 08:49:07.892072   19864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:49:07.909797   19864 kubeconfig.go:125] found "ha-412754" server: "https://192.168.39.254:8443"
	I1108 08:49:07.909829   19864 api_server.go:166] Checking apiserver status ...
	I1108 08:49:07.909876   19864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:49:07.930644   19864 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1761/cgroup
	W1108 08:49:07.942242   19864 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1761/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 08:49:07.942305   19864 ssh_runner.go:195] Run: ls
	I1108 08:49:07.947295   19864 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1108 08:49:07.952244   19864 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1108 08:49:07.952271   19864 status.go:463] ha-412754-m03 apiserver status = Running (err=<nil>)
	I1108 08:49:07.952283   19864 status.go:176] ha-412754-m03 status: &{Name:ha-412754-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:49:07.952300   19864 status.go:174] checking status of ha-412754-m04 ...
	I1108 08:49:07.953732   19864 status.go:371] ha-412754-m04 host status = "Running" (err=<nil>)
	I1108 08:49:07.953751   19864 host.go:66] Checking if "ha-412754-m04" exists ...
	I1108 08:49:07.956179   19864 main.go:143] libmachine: domain ha-412754-m04 has defined MAC address 52:54:00:6a:00:38 in network mk-ha-412754
	I1108 08:49:07.956581   19864 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:00:38", ip: ""} in network mk-ha-412754: {Iface:virbr1 ExpiryTime:2025-11-08 09:47:00 +0000 UTC Type:0 Mac:52:54:00:6a:00:38 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-412754-m04 Clientid:01:52:54:00:6a:00:38}
	I1108 08:49:07.956604   19864 main.go:143] libmachine: domain ha-412754-m04 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:00:38 in network mk-ha-412754
	I1108 08:49:07.956724   19864 host.go:66] Checking if "ha-412754-m04" exists ...
	I1108 08:49:07.956893   19864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:49:07.958870   19864 main.go:143] libmachine: domain ha-412754-m04 has defined MAC address 52:54:00:6a:00:38 in network mk-ha-412754
	I1108 08:49:07.959266   19864 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:00:38", ip: ""} in network mk-ha-412754: {Iface:virbr1 ExpiryTime:2025-11-08 09:47:00 +0000 UTC Type:0 Mac:52:54:00:6a:00:38 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-412754-m04 Clientid:01:52:54:00:6a:00:38}
	I1108 08:49:07.959304   19864 main.go:143] libmachine: domain ha-412754-m04 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:00:38 in network mk-ha-412754
	I1108 08:49:07.959492   19864 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/ha-412754-m04/id_rsa Username:docker}
	I1108 08:49:08.048317   19864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:49:08.070422   19864 status.go:176] ha-412754-m04 status: &{Name:ha-412754-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.00s)
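With m02 down, status still prints the per-node report on stdout but exits 7, as the non-zero-exit line above shows. A naive Output() caller that treats any error as fatal would throw that report away; the sketch below (assuming errors, fmt, and os/exec imports) tolerates an *exec.ExitError and keeps the output:

out, err := exec.Command("minikube", "-p", "ha-412754", "status").Output()
code := 0
var exitErr *exec.ExitError
if errors.As(err, &exitErr) {
	code = exitErr.ExitCode() // a non-zero exit still comes with stdout
} else if err != nil {
	panic(err) // the binary could not be run at all
}
fmt.Printf("status exited %d:\n%s", code, out)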

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (43.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node start m02 --alsologtostderr -v 5
E1108 08:49:22.634585    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 node start m02 --alsologtostderr -v 5: (42.687519126s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (367.12s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 stop --alsologtostderr -v 5
E1108 08:51:38.776002    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:52:00.326166    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:52:06.476729    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:53:23.403111    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 stop --alsologtostderr -v 5: (3m58.127530524s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 start --wait true --alsologtostderr -v 5: (2m8.838755588s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (367.12s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 node delete m03 --alsologtostderr -v 5: (17.567708533s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.24s)
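The -o go-template expression here is ordinary Go text/template syntax, so it can be exercised locally without a cluster. The sketch below runs the same template against a hand-written two-condition node list; lookups like .type work because text/template resolves field names as map keys on JSON-decoded values:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The template string the test passes to kubectl, verbatim.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// A minimal stand-in for `kubectl get nodes -o json` output.
const sample = `{"items":[{"status":{"conditions":[
  {"type":"MemoryPressure","status":"False"},
  {"type":"Ready","status":"True"}]}}]}`

func main() {
	var doc any
	if err := json.Unmarshal([]byte(sample), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, doc); err != nil { // prints " True"
		panic(err)
	}
}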

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (253.61s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 stop --alsologtostderr -v 5
E1108 08:56:38.778081    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:57:00.326267    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 stop --alsologtostderr -v 5: (4m13.546756977s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5: exit status 7 (62.977343ms)

                                                
                                                
-- stdout --
	ha-412754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-412754-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-412754-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:00:32.295155   23044 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:00:32.295408   23044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:00:32.295417   23044 out.go:374] Setting ErrFile to fd 2...
	I1108 09:00:32.295420   23044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:00:32.295639   23044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:00:32.295819   23044 out.go:368] Setting JSON to false
	I1108 09:00:32.295847   23044 mustload.go:66] Loading cluster: ha-412754
	I1108 09:00:32.295904   23044 notify.go:221] Checking for updates...
	I1108 09:00:32.296171   23044 config.go:182] Loaded profile config "ha-412754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:00:32.296183   23044 status.go:174] checking status of ha-412754 ...
	I1108 09:00:32.298399   23044 status.go:371] ha-412754 host status = "Stopped" (err=<nil>)
	I1108 09:00:32.298419   23044 status.go:384] host is not running, skipping remaining checks
	I1108 09:00:32.298427   23044 status.go:176] ha-412754 status: &{Name:ha-412754 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:00:32.298471   23044 status.go:174] checking status of ha-412754-m02 ...
	I1108 09:00:32.299884   23044 status.go:371] ha-412754-m02 host status = "Stopped" (err=<nil>)
	I1108 09:00:32.299900   23044 status.go:384] host is not running, skipping remaining checks
	I1108 09:00:32.299906   23044 status.go:176] ha-412754-m02 status: &{Name:ha-412754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:00:32.299921   23044 status.go:174] checking status of ha-412754-m04 ...
	I1108 09:00:32.301211   23044 status.go:371] ha-412754-m04 host status = "Stopped" (err=<nil>)
	I1108 09:00:32.301229   23044 status.go:384] host is not running, skipping remaining checks
	I1108 09:00:32.301235   23044 status.go:176] ha-412754-m04 status: &{Name:ha-412754-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (253.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (104.03s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1108 09:01:38.777255    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:02:00.325290    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m43.387691361s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.03s)
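Note: the go-template passed to kubectl above is plain Go text/template syntax evaluated against the nodes' JSON; it prints the status of each node's "Ready" condition, one per line. A minimal, self-contained sketch of that evaluation (the sample data below is a hypothetical two-node list, not output captured from this run):

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template the test passes via -o go-template.
	const tmpl = `'{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`

	// Hypothetical stand-in for `kubectl get nodes` JSON (two Ready nodes).
	data := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, data) // prints: ' True\n True\n'
}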

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (91.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 node add --control-plane --alsologtostderr -v 5
E1108 09:03:01.838251    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-412754 node add --control-plane --alsologtostderr -v 5: (1m31.207981818s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-412754 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (91.89s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
TestJSONOutput/start/Command (83.46s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-372178 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-372178 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.461694956s)
--- PASS: TestJSONOutput/start/Command (83.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-372178 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-372178 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.26s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-372178 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-372178 --output=json --user=testUser: (7.258900236s)
--- PASS: TestJSONOutput/stop/Command (7.26s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-273911 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-273911 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.492932ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ece7ad43-d5e6-4a1b-ae81-84bc6f1de00b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-273911] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f8cc460-f8e0-4abe-81bd-1f145366595c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21866"}}
	{"specversion":"1.0","id":"9225e82c-dd3d-47cf-ba48-d6b62d12002b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"85053693-42e7-46e4-bc83-344172daaccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig"}}
	{"specversion":"1.0","id":"f3e6eaef-1735-44a6-82ba-66e063e9bf06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube"}}
	{"specversion":"1.0","id":"603d7500-2315-4cef-a26f-b0a2436757b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bea4568a-6ba2-4b86-b9a0-c2a63f18cab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f30dad0-b324-43c0-9e61-e0ad78d15e10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-273911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-273911
--- PASS: TestErrorJSONOutput (0.22s)
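Note: each stdout line above is a CloudEvents envelope. A minimal sketch of filtering a --output=json stream for error events (the struct fields mirror the JSON keys shown above; this is illustrative consumer code, not minikube tooling):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Only the envelope fields this sketch needs, named after the keys
// visible in the stdout block above.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exitcode=%s name=%s: %s\n",
				ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}

Piping the stdout above through this program would print: exitcode=56 name=DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64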

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (88.94s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-834562 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-834562 --driver=kvm2  --container-runtime=crio: (43.234306775s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-837026 --driver=kvm2  --container-runtime=crio
E1108 09:06:38.775759    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-837026 --driver=kvm2  --container-runtime=crio: (43.163490366s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-834562
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-837026
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-837026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-837026
helpers_test.go:175: Cleaning up "first-834562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-834562
--- PASS: TestMinikubeProfile (88.94s)
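Note: `profile list -ojson` gives the test machine-readable output to assert on. A small sketch of consuming it (the valid/invalid top-level keys and the Name/Status fields are assumptions about the schema, not verified from this run):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed shape of `minikube profile list -o json`; only the fields this
// sketch needs are declared, and their names are unverified guesses.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	var pl profileList
	if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pl.Valid {
		fmt.Println(p.Name, p.Status)
	}
}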

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-445026 --memory=3072 --mount-string /tmp/TestMountStartserial2912044720/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1108 09:07:00.324932    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-445026 --memory=3072 --mount-string /tmp/TestMountStartserial2912044720/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.94992432s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.95s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-445026 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-445026 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
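Note: the verification step runs `findmnt --json /minikube-host` inside the guest. A minimal sketch of decoding that output (the struct matches util-linux findmnt's JSON layout, {"filesystems": [...]}; written as a standalone helper, not the test's own code):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Matches the shape of `findmnt --json <target>` output.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	var out findmntOut
	if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.Fstype, fs.Options)
	}
}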

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-462793 --memory=3072 --mount-string /tmp/TestMountStartserial2912044720/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-462793 --memory=3072 --mount-string /tmp/TestMountStartserial2912044720/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.657846723s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.66s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462793 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462793 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-445026 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462793 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462793 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-462793
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-462793: (1.261574159s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.31s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-462793
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-462793: (20.311362645s)
--- PASS: TestMountStart/serial/RestartStopped (21.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462793 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-462793 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041614 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041614 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.613143339s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.95s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-041614 -- rollout status deployment/busybox: (4.828675861s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-b22jq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-t9qqv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-b22jq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-t9qqv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-b22jq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-t9qqv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.40s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-b22jq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-b22jq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-t9qqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041614 -- exec busybox-7b57f96db7-t9qqv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
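Note: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes the 5th line of nslookup output and returns its 3rd space-separated field, which is where BusyBox-style nslookup prints the resolved address. A Go equivalent of that extraction (the sample output in main is a hypothetical BusyBox-format transcript, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

// extractHostIP mimics: nslookup host | awk 'NR==5' | cut -d' ' -f3
func extractHostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	// cut -d' ' splits on every single space, so use Split, not Fields.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical BusyBox nslookup output.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1\n"
	fmt.Println(extractHostIP(out)) // 192.168.39.1
}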

                                                
                                    
TestMultiNode/serial/AddNode (48.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-041614 -v=5 --alsologtostderr
E1108 09:10:03.405214    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-041614 -v=5 --alsologtostderr: (47.953961513s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.40s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-041614 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp testdata/cp-test.txt multinode-041614:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2229738502/001/cp-test_multinode-041614.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614:/home/docker/cp-test.txt multinode-041614-m02:/home/docker/cp-test_multinode-041614_multinode-041614-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m02 "sudo cat /home/docker/cp-test_multinode-041614_multinode-041614-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614:/home/docker/cp-test.txt multinode-041614-m03:/home/docker/cp-test_multinode-041614_multinode-041614-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m03 "sudo cat /home/docker/cp-test_multinode-041614_multinode-041614-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp testdata/cp-test.txt multinode-041614-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2229738502/001/cp-test_multinode-041614-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614-m02:/home/docker/cp-test.txt multinode-041614:/home/docker/cp-test_multinode-041614-m02_multinode-041614.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614 "sudo cat /home/docker/cp-test_multinode-041614-m02_multinode-041614.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614-m02:/home/docker/cp-test.txt multinode-041614-m03:/home/docker/cp-test_multinode-041614-m02_multinode-041614-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m03 "sudo cat /home/docker/cp-test_multinode-041614-m02_multinode-041614-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp testdata/cp-test.txt multinode-041614-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2229738502/001/cp-test_multinode-041614-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614-m03:/home/docker/cp-test.txt multinode-041614:/home/docker/cp-test_multinode-041614-m03_multinode-041614.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614 "sudo cat /home/docker/cp-test_multinode-041614-m03_multinode-041614.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 cp multinode-041614-m03:/home/docker/cp-test.txt multinode-041614-m02:/home/docker/cp-test_multinode-041614-m03_multinode-041614-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 ssh -n multinode-041614-m02 "sudo cat /home/docker/cp-test_multinode-041614-m03_multinode-041614-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.83s)
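Note: the node-to-node portion of the copy matrix above is every ordered pair of distinct nodes (each copy followed by a cat on both ends). A sketch of generating that pair matrix, with the node and file names taken from the log:

package main

import "fmt"

func main() {
	nodes := []string{"multinode-041614", "multinode-041614-m02", "multinode-041614-m03"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			fmt.Printf("minikube -p multinode-041614 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}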

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-041614 node stop m03: (1.64225918s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041614 status: exit status 7 (322.614139ms)

                                                
                                                
-- stdout --
	multinode-041614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-041614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-041614-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr: exit status 7 (329.948246ms)

                                                
                                                
-- stdout --
	multinode-041614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-041614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-041614-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:10:51.188934   28741 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:10:51.189170   28741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:10:51.189178   28741 out.go:374] Setting ErrFile to fd 2...
	I1108 09:10:51.189182   28741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:10:51.189373   28741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:10:51.189550   28741 out.go:368] Setting JSON to false
	I1108 09:10:51.189574   28741 mustload.go:66] Loading cluster: multinode-041614
	I1108 09:10:51.189662   28741 notify.go:221] Checking for updates...
	I1108 09:10:51.189977   28741 config.go:182] Loaded profile config "multinode-041614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:51.190002   28741 status.go:174] checking status of multinode-041614 ...
	I1108 09:10:51.192106   28741 status.go:371] multinode-041614 host status = "Running" (err=<nil>)
	I1108 09:10:51.192121   28741 host.go:66] Checking if "multinode-041614" exists ...
	I1108 09:10:51.194417   28741 main.go:143] libmachine: domain multinode-041614 has defined MAC address 52:54:00:74:64:c9 in network mk-multinode-041614
	I1108 09:10:51.194806   28741 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:64:c9", ip: ""} in network mk-multinode-041614: {Iface:virbr1 ExpiryTime:2025-11-08 10:08:23 +0000 UTC Type:0 Mac:52:54:00:74:64:c9 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-041614 Clientid:01:52:54:00:74:64:c9}
	I1108 09:10:51.194833   28741 main.go:143] libmachine: domain multinode-041614 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:64:c9 in network mk-multinode-041614
	I1108 09:10:51.194989   28741 host.go:66] Checking if "multinode-041614" exists ...
	I1108 09:10:51.195205   28741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:10:51.197185   28741 main.go:143] libmachine: domain multinode-041614 has defined MAC address 52:54:00:74:64:c9 in network mk-multinode-041614
	I1108 09:10:51.197540   28741 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:64:c9", ip: ""} in network mk-multinode-041614: {Iface:virbr1 ExpiryTime:2025-11-08 10:08:23 +0000 UTC Type:0 Mac:52:54:00:74:64:c9 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-041614 Clientid:01:52:54:00:74:64:c9}
	I1108 09:10:51.197573   28741 main.go:143] libmachine: domain multinode-041614 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:64:c9 in network mk-multinode-041614
	I1108 09:10:51.197699   28741 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/multinode-041614/id_rsa Username:docker}
	I1108 09:10:51.285290   28741 ssh_runner.go:195] Run: systemctl --version
	I1108 09:10:51.291568   28741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:10:51.308465   28741 kubeconfig.go:125] found "multinode-041614" server: "https://192.168.39.173:8443"
	I1108 09:10:51.308514   28741 api_server.go:166] Checking apiserver status ...
	I1108 09:10:51.308554   28741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:10:51.328489   28741 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	W1108 09:10:51.339685   28741 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:10:51.339737   28741 ssh_runner.go:195] Run: ls
	I1108 09:10:51.344778   28741 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1108 09:10:51.349420   28741 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I1108 09:10:51.349450   28741 status.go:463] multinode-041614 apiserver status = Running (err=<nil>)
	I1108 09:10:51.349463   28741 status.go:176] multinode-041614 status: &{Name:multinode-041614 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:10:51.349483   28741 status.go:174] checking status of multinode-041614-m02 ...
	I1108 09:10:51.351118   28741 status.go:371] multinode-041614-m02 host status = "Running" (err=<nil>)
	I1108 09:10:51.351138   28741 host.go:66] Checking if "multinode-041614-m02" exists ...
	I1108 09:10:51.353318   28741 main.go:143] libmachine: domain multinode-041614-m02 has defined MAC address 52:54:00:9b:c2:03 in network mk-multinode-041614
	I1108 09:10:51.353719   28741 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:c2:03", ip: ""} in network mk-multinode-041614: {Iface:virbr1 ExpiryTime:2025-11-08 10:09:15 +0000 UTC Type:0 Mac:52:54:00:9b:c2:03 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-041614-m02 Clientid:01:52:54:00:9b:c2:03}
	I1108 09:10:51.353741   28741 main.go:143] libmachine: domain multinode-041614-m02 has defined IP address 192.168.39.156 and MAC address 52:54:00:9b:c2:03 in network mk-multinode-041614
	I1108 09:10:51.353859   28741 host.go:66] Checking if "multinode-041614-m02" exists ...
	I1108 09:10:51.354049   28741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:10:51.356489   28741 main.go:143] libmachine: domain multinode-041614-m02 has defined MAC address 52:54:00:9b:c2:03 in network mk-multinode-041614
	I1108 09:10:51.356907   28741 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:c2:03", ip: ""} in network mk-multinode-041614: {Iface:virbr1 ExpiryTime:2025-11-08 10:09:15 +0000 UTC Type:0 Mac:52:54:00:9b:c2:03 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-041614-m02 Clientid:01:52:54:00:9b:c2:03}
	I1108 09:10:51.356931   28741 main.go:143] libmachine: domain multinode-041614-m02 has defined IP address 192.168.39.156 and MAC address 52:54:00:9b:c2:03 in network mk-multinode-041614
	I1108 09:10:51.357084   28741 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21866-5845/.minikube/machines/multinode-041614-m02/id_rsa Username:docker}
	I1108 09:10:51.442451   28741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:10:51.460667   28741 status.go:176] multinode-041614-m02 status: &{Name:multinode-041614-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:10:51.460702   28741 status.go:174] checking status of multinode-041614-m03 ...
	I1108 09:10:51.462347   28741 status.go:371] multinode-041614-m03 host status = "Stopped" (err=<nil>)
	I1108 09:10:51.462367   28741 status.go:384] host is not running, skipping remaining checks
	I1108 09:10:51.462374   28741 status.go:176] multinode-041614-m03 status: &{Name:multinode-041614-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
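Note: `minikube status` exits non-zero when any node is down (exit status 7 in this run), so callers can branch on the exit code instead of parsing the text. A minimal sketch using os/exec (the meaning of code 7 is inferred from this run, not from minikube documentation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-041614", "status").Output()
	fmt.Print(string(out))
	if ee, ok := err.(*exec.ExitError); ok {
		// Exit status 7 was observed above when a node's host was stopped.
		fmt.Println("status exit code:", ee.ExitCode())
	}
}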

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-041614 node start m03 -v=5 --alsologtostderr: (40.039077729s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.55s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (308.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-041614
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-041614
E1108 09:11:38.776884    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:12:00.326205    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-041614: (2m55.253078533s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041614 --wait=true -v=5 --alsologtostderr
E1108 09:16:38.775964    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041614 --wait=true -v=5 --alsologtostderr: (2m13.465356661s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-041614
--- PASS: TestMultiNode/serial/RestartKeepsNodes (308.84s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-041614 node delete m03: (2.182651129s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.65s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (165.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 stop
E1108 09:17:00.325397    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-041614 stop: (2m45.786479599s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041614 status: exit status 7 (59.548513ms)

                                                
                                                
-- stdout --
	multinode-041614
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-041614-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr: exit status 7 (58.966209ms)

                                                
                                                
-- stdout --
	multinode-041614
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-041614-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:19:29.395146   31560 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:19:29.395574   31560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:19:29.395582   31560 out.go:374] Setting ErrFile to fd 2...
	I1108 09:19:29.395586   31560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:19:29.395742   31560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:19:29.395879   31560 out.go:368] Setting JSON to false
	I1108 09:19:29.395905   31560 mustload.go:66] Loading cluster: multinode-041614
	I1108 09:19:29.395990   31560 notify.go:221] Checking for updates...
	I1108 09:19:29.396306   31560 config.go:182] Loaded profile config "multinode-041614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:19:29.396322   31560 status.go:174] checking status of multinode-041614 ...
	I1108 09:19:29.398512   31560 status.go:371] multinode-041614 host status = "Stopped" (err=<nil>)
	I1108 09:19:29.398529   31560 status.go:384] host is not running, skipping remaining checks
	I1108 09:19:29.398535   31560 status.go:176] multinode-041614 status: &{Name:multinode-041614 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:19:29.398566   31560 status.go:174] checking status of multinode-041614-m02 ...
	I1108 09:19:29.399697   31560 status.go:371] multinode-041614-m02 host status = "Stopped" (err=<nil>)
	I1108 09:19:29.399710   31560 status.go:384] host is not running, skipping remaining checks
	I1108 09:19:29.399714   31560 status.go:176] multinode-041614-m02 status: &{Name:multinode-041614-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (165.91s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (89.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041614 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1108 09:19:41.841081    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041614 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m29.017844101s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041614 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-041614
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041614-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-041614-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.234102ms)

                                                
                                                
-- stdout --
	* [multinode-041614-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-041614-m02' is duplicated with machine name 'multinode-041614-m02' in profile 'multinode-041614'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041614-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041614-m03 --driver=kvm2  --container-runtime=crio: (38.839419377s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-041614
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-041614: exit status 80 (195.223071ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-041614 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-041614-m03 already exists in multinode-041614-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-041614-m03
E1108 09:21:38.776608    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.97s)
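Note: both failures above come from name collisions, since minikube names the extra machines of profile P as P-m02, P-m03, and so on, so those names are off-limits for new profiles. A sketch of that validation rule as the log implies it (not minikube's actual implementation):

package main

import "fmt"

// conflicts reports whether a proposed profile name collides with an
// existing profile or with any of its machine names.
func conflicts(proposed string, profiles map[string][]string) bool {
	for profile, machines := range profiles {
		if proposed == profile {
			return true
		}
		for _, m := range machines {
			if proposed == m {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-041614": {"multinode-041614", "multinode-041614-m02"},
	}
	fmt.Println(conflicts("multinode-041614-m02", existing)) // true  -> MK_USAGE above
	fmt.Println(conflicts("fresh-profile", existing))        // false
}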

                                                
                                    
TestScheduledStopUnix (110.5s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-578347 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-578347 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.90727702s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578347 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-578347 -n scheduled-stop-578347
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578347 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1108 09:25:05.193976    9745 retry.go:31] will retry after 80.963µs: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.195174    9745 retry.go:31] will retry after 81.921µs: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.196360    9745 retry.go:31] will retry after 157.364µs: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.197491    9745 retry.go:31] will retry after 279.828µs: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.198638    9745 retry.go:31] will retry after 389.402µs: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.199766    9745 retry.go:31] will retry after 1.012713ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.200905    9745 retry.go:31] will retry after 702.402µs: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.202040    9745 retry.go:31] will retry after 1.128088ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.204245    9745 retry.go:31] will retry after 2.659592ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.207436    9745 retry.go:31] will retry after 3.173482ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.211637    9745 retry.go:31] will retry after 4.324413ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.216857    9745 retry.go:31] will retry after 9.383183ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.227118    9745 retry.go:31] will retry after 15.08644ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.242341    9745 retry.go:31] will retry after 28.779454ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.271637    9745 retry.go:31] will retry after 20.779832ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
I1108 09:25:05.292987    9745 retry.go:31] will retry after 39.59242ms: open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/scheduled-stop-578347/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578347 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-578347 -n scheduled-stop-578347
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-578347
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578347 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-578347
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-578347: exit status 7 (56.79229ms)
-- stdout --
	scheduled-stop-578347
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-578347 -n scheduled-stop-578347
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-578347 -n scheduled-stop-578347: exit status 7 (56.683294ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-578347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-578347
--- PASS: TestScheduledStopUnix (110.50s)
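
The burst of retry.go lines in this test shows the harness polling for the scheduled-stop pid file with a short but growing, jittered wait between attempts. A Go sketch of that retry-with-backoff pattern; the path and limits here are illustrative, not minikube's constants:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, growing the wait with jitter each try.
func waitForFile(path string, maxWait time.Duration) error {
	wait := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: open %s: no such file or directory\n", wait, path)
		time.Sleep(wait)
		wait = time.Duration(float64(wait) * (1.5 + rand.Float64())) // grow with jitter
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	_ = waitForFile("/tmp/scheduled-stop-sketch/pid", 2*time.Second)
}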

TestRunningBinaryUpgrade (139.61s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.936473735 start -p running-upgrade-727414 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.936473735 start -p running-upgrade-727414 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m13.984686198s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-727414 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-727414 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.482066067s)
helpers_test.go:175: Cleaning up "running-upgrade-727414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-727414
--- PASS: TestRunningBinaryUpgrade (139.61s)

TestKubernetesUpgrade (174.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.82309046s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-914955
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-914955: (2.659358697s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-914955 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-914955 status --format={{.Host}}: exit status 7 (68.319675ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.519202663s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-914955 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.070835ms)
-- stdout --
	* [kubernetes-upgrade-914955] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-914955
	    minikube start -p kubernetes-upgrade-914955 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9149552 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-914955 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914955 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.849756775s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-914955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-914955
--- PASS: TestKubernetesUpgrade (174.06s)
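
The K8S_DOWNGRADE_UNSUPPORTED exit above is a version-ordering guard: a start against an existing cluster must not request an older Kubernetes than the cluster already runs. A hand-rolled Go sketch of that comparison (minikube itself uses a semver library; this is illustrative only):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.34.1" into [1 34 1].
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// less reports whether version a orders before version b.
func less(a, b []int) bool {
	for i := 0; i < len(a) && i < len(b); i++ {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return len(a) < len(b)
}

func main() {
	current, requested := "v1.34.1", "v1.28.0"
	if less(parse(requested), parse(current)) {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
	}
}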

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-255549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-255549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (90.54976ms)
-- stdout --
	* [NoKubernetes-255549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
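
The exit status 14 above is plain flag validation: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A sketch with Go's standard flag package; the flag names mirror the CLI, but nothing below is minikube code:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage errors surface as exit status 14 in the log above
	}
	fmt.Println("flags ok")
}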

TestNoKubernetes/serial/StartWithK8s (101.72s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-255549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1108 09:26:38.776128    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:26:43.407140    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-255549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m41.438533111s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-255549 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.72s)

TestNoKubernetes/serial/StartWithStopK8s (6.55s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-255549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-255549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (5.466035216s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-255549 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-255549 status -o json: exit status 2 (208.110991ms)
-- stdout --
	{"Name":"NoKubernetes-255549","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-255549
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.55s)
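
The status -o json payload above is easy to consume programmatically. A sketch that decodes it into a struct; the field names follow the JSON keys shown in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the keys emitted by "minikube status -o json" above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-255549","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s\n", s.Name, s.Host, s.Kubelet)
}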

TestNoKubernetes/serial/Start (25.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-255549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-255549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (25.631945132s)
--- PASS: TestNoKubernetes/serial/Start (25.63s)

TestNetworkPlugins/group/false (3.22s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-615410 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-615410 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.855393ms)
-- stdout --
	* [false-615410] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1108 09:28:22.107608   36852 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:28:22.107704   36852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:28:22.107711   36852 out.go:374] Setting ErrFile to fd 2...
	I1108 09:28:22.107718   36852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:28:22.107889   36852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5845/.minikube/bin
	I1108 09:28:22.108438   36852 out.go:368] Setting JSON to false
	I1108 09:28:22.109706   36852 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4243,"bootTime":1762589859,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:28:22.109768   36852 start.go:143] virtualization: kvm guest
	I1108 09:28:22.111466   36852 out.go:179] * [false-615410] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:28:22.112538   36852 notify.go:221] Checking for updates...
	I1108 09:28:22.112781   36852 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:28:22.114691   36852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:28:22.115778   36852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5845/kubeconfig
	I1108 09:28:22.116770   36852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5845/.minikube
	I1108 09:28:22.117675   36852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:28:22.118674   36852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:28:22.120079   36852 config.go:182] Loaded profile config "NoKubernetes-255549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 09:28:22.120178   36852 config.go:182] Loaded profile config "cert-expiration-349612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:28:22.120277   36852 config.go:182] Loaded profile config "cert-options-476448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:28:22.120369   36852 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:28:22.153446   36852 out.go:179] * Using the kvm2 driver based on user configuration
	I1108 09:28:22.154326   36852 start.go:309] selected driver: kvm2
	I1108 09:28:22.154341   36852 start.go:930] validating driver "kvm2" against <nil>
	I1108 09:28:22.154354   36852 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:28:22.155872   36852 out.go:203] 
	W1108 09:28:22.156848   36852 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 09:28:22.157806   36852 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-615410 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-615410

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-615410

>>> host: /etc/nsswitch.conf:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/hosts:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/resolv.conf:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-615410

>>> host: crictl pods:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: crictl containers:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> k8s: describe netcat deployment:
error: context "false-615410" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-615410" does not exist

>>> k8s: netcat logs:
error: context "false-615410" does not exist

>>> k8s: describe coredns deployment:
error: context "false-615410" does not exist

>>> k8s: describe coredns pods:
error: context "false-615410" does not exist

>>> k8s: coredns logs:
error: context "false-615410" does not exist

>>> k8s: describe api server pod(s):
error: context "false-615410" does not exist

>>> k8s: api server logs:
error: context "false-615410" does not exist

>>> host: /etc/cni:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: ip a s:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: ip r s:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: iptables-save:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: iptables table nat:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> k8s: describe kube-proxy daemon set:
error: context "false-615410" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-615410" does not exist

>>> k8s: kube-proxy logs:
error: context "false-615410" does not exist

>>> host: kubelet daemon status:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: kubelet daemon config:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> k8s: kubelet logs:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.93:8443
  name: cert-expiration-349612
contexts:
- context:
    cluster: cert-expiration-349612
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-349612
  name: cert-expiration-349612
current-context: ""
kind: Config
users:
- name: cert-expiration-349612
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/cert-expiration-349612/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/cert-expiration-349612/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-615410

>>> host: docker daemon status:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: docker daemon config:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/docker/daemon.json:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: docker system info:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: cri-docker daemon status:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: cri-docker daemon config:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: cri-dockerd version:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: containerd daemon status:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: containerd daemon config:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/containerd/config.toml:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: containerd config dump:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: crio daemon status:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: crio daemon config:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: /etc/crio:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

>>> host: crio config:
* Profile "false-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-615410"

----------------------- debugLogs end: false-615410 [took: 2.966307687s] --------------------------------
helpers_test.go:175: Cleaning up "false-615410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-615410
--- PASS: TestNetworkPlugins/group/false (3.22s)
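
The usage error in this group comes from an up-front runtime check: the crio runtime needs a CNI, so --cni=false is rejected before any VM is created. A minimal sketch of that guard; the validator below is illustrative, not minikube's:

package main

import "fmt"

// validateCNI rejects the cni=false/crio combination seen in the log above.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime == "crio" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}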

TestISOImage/Setup (40.76s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-788314 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-788314 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.760647913s)
--- PASS: TestISOImage/Setup (40.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-255549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-255549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (157.83306ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
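
systemctl is-active --quiet reports only through its exit code (0 when the unit is active, non-zero otherwise), which is why the test treats a non-zero ssh exit as proof that the kubelet is not running. A local Go sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code is the whole answer; --quiet suppresses all output.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err) // expected on hosts without kubelet
	} else {
		fmt.Println("kubelet is active")
	}
}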

TestNoKubernetes/serial/ProfileList (0.79s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.79s)

TestNoKubernetes/serial/Stop (1.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-255549
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-255549: (1.334322522s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (55.55s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-255549 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-255549 --driver=kvm2  --container-runtime=crio: (55.548642886s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (55.55s)

TestISOImage/Binaries/crictl (0.19s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)
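
The eleven Binaries subtests that follow all run the same "which" probe inside the guest, the textbook case for Go's table-driven subtests. A sketch of how such a loop might look; runSSH is a hypothetical stand-in for the harness's ssh helper:

package main

import (
	"fmt"
	"os/exec"
	"testing"
)

// runSSH is hypothetical; the real harness shells out through minikube ssh.
func runSSH(profile, cmd string) error {
	return exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", cmd).Run()
}

func TestISOBinaries(t *testing.T) {
	bins := []string{"crictl", "curl", "docker", "git", "iptables",
		"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"}
	for _, bin := range bins {
		t.Run(bin, func(t *testing.T) {
			if err := runSSH("guest-788314", fmt.Sprintf("which %s", bin)); err != nil {
				t.Fatalf("%s not found in ISO: %v", bin, err)
			}
		})
	}
}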

TestISOImage/Binaries/curl (0.2s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

TestISOImage/Binaries/docker (0.18s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

TestISOImage/Binaries/git (0.19s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

TestISOImage/Binaries/iptables (0.19s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

TestISOImage/Binaries/podman (0.19s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

TestISOImage/Binaries/rsync (0.18s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

TestISOImage/Binaries/socat (0.2s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.20s)

TestISOImage/Binaries/wget (0.18s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

TestISOImage/Binaries/VBoxControl (0.18s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

TestISOImage/Binaries/VBoxService (0.18s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-255549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-255549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (169.436048ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestStoppedBinaryUpgrade/Setup (3.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.04s)

TestStoppedBinaryUpgrade/Upgrade (122.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1017706133 start -p stopped-upgrade-904490 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1017706133 start -p stopped-upgrade-904490 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m14.12065323s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1017706133 -p stopped-upgrade-904490 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1017706133 -p stopped-upgrade-904490 stop: (1.924420158s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-904490 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-904490 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.976555199s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.02s)
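
The Upgrade subtest is three sequential invocations: start the profile with the old release binary, stop it, then start the same profile with the binary under test. A Go sketch of that choreography with os/exec; the paths mirror the log, but the helper is illustrative:

package main

import (
	"log"
	"os/exec"
)

// run executes one binary invocation and aborts the flow on failure.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	profile := "stopped-upgrade-904490"
	old := "/tmp/minikube-v1.32.0.1017706133"
	run(old, "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
	run(old, "-p", profile, "stop")
	run("out/minikube-linux-amd64", "start", "-p", profile, "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
}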

TestPause/serial/Start (105.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-022459 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-022459 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.48765448s)
--- PASS: TestPause/serial/Start (105.49s)

TestNetworkPlugins/group/auto/Start (70.48s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m10.481775654s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.48s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-904490
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-904490: (1.102720023s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

TestNetworkPlugins/group/kindnet/Start (88.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1108 09:31:38.776647    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:00.325017    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.357158373s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.36s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (93.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m33.556275379s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.56s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-615410 "pgrep -a kubelet"
I1108 09:32:39.417973    9745 config.go:182] Loaded profile config "auto-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)
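The KubeletFlags subtests are thin wrappers: they shell into the node with `minikube ssh` and capture the kubelet command line via `pgrep -a kubelet` (the -a flag prints the full argument list). A hedged sketch of the same probe, reusing the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep -a prints "PID full-command-line", which is what the test
	// inspects for the expected kubelet flags.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "auto-615410", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet process: %s", out)
}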

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t5zgq" [359ded85-a6a9-4315-95e2-22ffd385d652] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t5zgq" [359ded85-a6a9-4315-95e2-22ffd385d652] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004961257s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)
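Each NetCatPod step applies testdata/netcat-deployment.yaml with `kubectl replace --force` (so a leftover object from a prior run is recreated rather than patched) and then waits for a pod labelled app=netcat to report Ready. A rough equivalent of that wait loop, assuming kubectl is on PATH; this is a sketch, not the helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "auto-615410"
	deadline := time.Now().Add(15 * time.Minute) // mirrors the test's 15m0s budget
	for time.Now().Before(deadline) {
		// Read the Ready condition of every pod matching the selector.
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "app=netcat",
			"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
		if strings.Contains(string(out), "True") {
			fmt.Println("netcat pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=netcat to become Ready")
}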

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
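DNS, Localhost, and HairPin are three quick probes executed inside the netcat deployment: resolving kubernetes.default through cluster DNS, a TCP connect to localhost:8080, and a TCP connect back to the pod's own `netcat` service name (the hairpin path, where service traffic must loop back to the originating pod). A compact table-driven sketch of those kubectl exec calls:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-615410"
	probes := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
		// Hairpin: dial the pod's own Service; the CNI must route it back.
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		out, err := exec.Command("kubectl", "--context", ctx,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", p.cmd).CombinedOutput()
		fmt.Printf("%-9s err=%v\n%s", p.name, err, out)
	}
}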

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mfjs7" [ae01e21e-7f81-4639-acaa-a1bef20af8f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00553309s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
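ControllerPod only waits for the CNI's own daemon pod (app=kindnet here; k8s-app=calico-node for the Calico variant below) to become Ready in its namespace. The test polls via its helpers, but `kubectl wait` can express the same check in a single call; a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every pod matching the selector reports Ready, or time out.
	out, err := exec.Command("kubectl", "--context", "kindnet-615410",
		"wait", "--namespace", "kube-system",
		"--for=condition=ready", "pod",
		"--selector=app=kindnet", "--timeout=10m").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}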

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.445025285s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-615410 "pgrep -a kubelet"
I1108 09:33:10.929354    9745 config.go:182] Loaded profile config "kindnet-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bcn2f" [a25976f6-cc11-49de-a602-d156652fdfa0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bcn2f" [a25976f6-cc11-49de-a602-d156652fdfa0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004789481s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (92.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m32.87821234s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.88s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-q84bm" [fd889824-a095-4800-8005-7809dcc1f76b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006016079s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m20.922586822s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.92s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-615410 "pgrep -a kubelet"
I1108 09:33:43.486437    9745 config.go:182] Loaded profile config "calico-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b5zrk" [96f8daf3-e9f8-440c-84dd-a62effd57f2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b5zrk" [96f8daf3-e9f8-440c-84dd-a62effd57f2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004620244s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (81.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-615410 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m21.156241359s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-615410 "pgrep -a kubelet"
I1108 09:34:20.612384    9745 config.go:182] Loaded profile config "custom-flannel-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-478xp" [943d21f3-060d-4b5c-ba30-3e7d49c41bf5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-478xp" [943d21f3-060d-4b5c-ba30-3e7d49c41bf5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.126451095s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (96.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-915044 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-915044 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m36.018067458s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-615410 "pgrep -a kubelet"
I1108 09:35:01.295630    9745 config.go:182] Loaded profile config "bridge-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rbt57" [c4153af2-cc84-4b42-9ad8-4d804e174f85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rbt57" [c4153af2-cc84-4b42-9ad8-4d804e174f85] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006709365s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-615410 "pgrep -a kubelet"
I1108 09:35:09.719317    9745 config.go:182] Loaded profile config "enable-default-cni-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w46q9" [82e4d9ff-ddff-4cd0-a46c-cb1b090c9ee7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w46q9" [82e4d9ff-ddff-4cd0-a46c-cb1b090c9ee7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005139683s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (104.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-661900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-661900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m44.160540198s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.16s)
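--preload=false makes minikube skip the preloaded image tarball, so cri-o pulls every required image individually; that is presumably why this FirstStart (1m44s) is the slowest in this batch. One way to inspect what actually landed in the node's image store afterwards (a sketch; assumes minikube ssh forwards the trailing command and that crictl is present on the node, as it normally is):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List cri-o's image store inside the no-preload node.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "no-preload-661900", "--", "sudo", "crictl", "images").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}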

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-mhxhf" [417526dc-b597-4b08-ad21-2a73bc9935ae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004442944s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (97.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-937503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-937503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m37.829893764s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.83s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-615410 "pgrep -a kubelet"
I1108 09:35:41.278236    9745 config.go:182] Loaded profile config "flannel-615410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-615410 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lxm7n" [217ddfe4-348d-4d43-ab73-83d6d96a42be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lxm7n" [217ddfe4-348d-4d43-ab73-83d6d96a42be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003864374s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-615410 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-615410 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-670955 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 09:36:21.843431    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-670955 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.861380153s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-915044 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ee55c4fc-c2c6-45ca-8a4e-c5790f771306] Pending
helpers_test.go:352: "busybox" [ee55c4fc-c2c6-45ca-8a4e-c5790f771306] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ee55c4fc-c2c6-45ca-8a4e-c5790f771306] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00721225s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-915044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)
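DeployApp creates a single busybox pod from testdata/busybox.yaml, waits for it to run, and then reads the container's open-file limit with `ulimit -n` to catch regressions in the runtime's default rlimits. A standalone sketch of that deploy-and-check (omitting the readiness wait shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "old-k8s-version-915044"
	// Create the pod, as in the log's `kubectl create -f testdata/busybox.yaml`.
	if out, err := exec.Command("kubectl", "--context", ctx,
		"create", "-f", "testdata/busybox.yaml").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("create failed: %v\n%s", err, out))
	}
	// Once the pod is Running, read the open-files limit inside the container.
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Printf("ulimit -n => %s(err=%v)\n", out, err)
}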

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-915044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-915044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.583457305s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-915044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.71s)
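EnableAddonWhileActive enables metrics-server on the live cluster, but the --images/--registries flags swap its image for registry.k8s.io/echoserver:1.4 hosted on a deliberately fake registry domain, so the suite never depends on the real metrics-server image; the recorded follow-up is just a kubectl describe of the Deployment. Sketched below (an illustration of the two invocations, not the test source):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-915044"
	run := func(bin string, args ...string) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ %s %v\nerr=%v\n%s\n", bin, args, err, out)
	}
	// Enable the addon with the image redirected to a stand-in on fake.domain.
	run("out/minikube-linux-amd64", "addons", "enable", "metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	// The assertion only needs the Deployment object to exist.
	run("kubectl", "--context", profile, "describe", "deploy/metrics-server", "-n", "kube-system")
}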

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (86.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-915044 --alsologtostderr -v=3
E1108 09:36:38.775725    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/functional-427090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:00.325261    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/addons-982714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-915044 --alsologtostderr -v=3: (1m26.169658951s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-661900 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ad385c1c-c1b8-47be-9261-6f4d23b29796] Pending
helpers_test.go:352: "busybox" [ad385c1c-c1b8-47be-9261-6f4d23b29796] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ad385c1c-c1b8-47be-9261-6f4d23b29796] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004483848s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-661900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-937503 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d6877bfd-b1ac-4c03-9504-e48b90c52d31] Pending
helpers_test.go:352: "busybox" [d6877bfd-b1ac-4c03-9504-e48b90c52d31] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d6877bfd-b1ac-4c03-9504-e48b90c52d31] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00396943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-937503 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-661900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-661900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-937503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-937503 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (81.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-661900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-661900 --alsologtostderr -v=3: (1m21.003575071s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (81.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (86.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-937503 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-937503 --alsologtostderr -v=3: (1m26.230417996s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (86.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-670955 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [01d875ed-4364-4f48-bc26-e52f4432fc74] Pending
helpers_test.go:352: "busybox" [01d875ed-4364-4f48-bc26-e52f4432fc74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1108 09:37:39.671621    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:39.677978    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:39.689305    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:39.710596    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:39.751945    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:39.833398    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [01d875ed-4364-4f48-bc26-e52f4432fc74] Running
E1108 09:37:39.995348    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:40.316666    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:40.958376    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:42.240560    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:37:44.802260    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004362666s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-670955 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-670955 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-670955 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (86.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-670955 --alsologtostderr -v=3
E1108 09:37:49.924107    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:00.165826    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-670955 --alsologtostderr -v=3: (1m26.11105488s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (86.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-915044 -n old-k8s-version-915044
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-915044 -n old-k8s-version-915044: exit status 7 (59.904587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-915044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
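minikube status exits non-zero by design when the host is down; per the stdout above, exit status 7 here corresponds to a Stopped host, which is why the test annotates it "(may be ok)". Telling that apart from a genuine failure takes an exec.ExitError check; a sketch under that assumption:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-915044")
	out, err := cmd.Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit 7 = stopped host, expected right after `minikube stop`.
		fmt.Println("cluster is stopped (may be ok)")
	default:
		panic(err)
	}
}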

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-915044 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1108 09:38:04.704754    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:04.711152    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:04.722565    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:04.743995    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:04.785445    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:04.867088    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:05.028818    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:05.350962    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:05.992723    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:07.274113    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:09.836078    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:14.958197    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:20.647656    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:25.199585    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.294327    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.300703    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.312168    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.333554    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.374965    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.457077    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.619322    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:37.941223    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:38.583307    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:39.865194    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:42.426715    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:38:45.681223    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-915044 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (46.432099637s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-915044 -n old-k8s-version-915044
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.74s)
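SecondStart re-runs `minikube start` on the profile that was just stopped, so passing means the on-disk cluster state (certs, etcd data, addon config) survives a full stop/start cycle, and the follow-up `status --format={{.Host}}` must now succeed rather than exit 7. (The interleaved cert_rotation errors reference client certs of profiles such as auto-615410 and kindnet-615410 that were already torn down; they appear to be watcher noise from the shared test process, not failures.) A sketch of the restart-and-verify pair:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-915044"
	// Restart the previously stopped profile (flags abridged from the log).
	if out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072", "--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.28.0").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("second start failed: %v\n%s", err, out))
	}
	// This time the host status must report Running with exit code 0.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	fmt.Printf("host=%s err=%v\n", out, err)
}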

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-661900 -n no-preload-661900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-661900 -n no-preload-661900: exit status 7 (60.621826ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-661900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/no-preload/serial/SecondStart (62.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-661900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 09:38:47.548031    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-661900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.658043829s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-661900 -n no-preload-661900
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (62.07s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gdv58" [c91e3ee6-c608-4cc0-94e7-6f2107c367d7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gdv58" [c91e3ee6-c608-4cc0-94e7-6f2107c367d7] Running
E1108 09:38:57.789385    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004036606s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-937503 -n embed-certs-937503
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-937503 -n embed-certs-937503: exit status 7 (70.176813ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-937503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (60.89s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-937503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-937503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.61190324s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-937503 -n embed-certs-937503
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (60.89s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gdv58" [c91e3ee6-c608-4cc0-94e7-6f2107c367d7] Running
E1108 09:39:01.609869    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004905363s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-915044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-915044 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-915044 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-915044 --alsologtostderr -v=1: (1.025445467s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-915044 -n old-k8s-version-915044
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-915044 -n old-k8s-version-915044: exit status 2 (263.04672ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-915044 -n old-k8s-version-915044
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-915044 -n old-k8s-version-915044: exit status 2 (251.882614ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-915044 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-915044 -n old-k8s-version-915044
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-915044 -n old-k8s-version-915044
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

TestStartStop/group/newest-cni/serial/FirstStart (67.26s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-469876 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-469876 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.262725311s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (67.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955: exit status 7 (60.699448ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-670955 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (83.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-670955 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 09:39:18.271654    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:20.824536    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:20.830956    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:20.842320    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:20.863782    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:20.905239    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:20.986659    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:21.148224    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:21.469998    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:22.111524    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:23.393611    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:25.955190    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:26.643414    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:31.076988    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:39:41.318313    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-670955 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.757783497s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (83.06s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-72qwm" [b142d922-7a47-49df-95e6-503cb40bafcd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.044757192s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k6zmx" [94bfffd6-c1a7-4083-bfb0-856d0c32e5ac] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k6zmx" [94bfffd6-c1a7-4083-bfb0-856d0c32e5ac] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004800452s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-72qwm" [b142d922-7a47-49df-95e6-503cb40bafcd] Running
E1108 09:39:59.233770    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/calico-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006344213s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-661900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-661900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-661900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-661900 --alsologtostderr -v=1: (1.118446131s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-661900 -n no-preload-661900
E1108 09:40:01.572254    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:01.578875    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-661900 -n no-preload-661900: exit status 2 (296.515267ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-661900 -n no-preload-661900
E1108 09:40:01.590155    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:01.611671    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:01.653360    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:01.735308    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:01.799870    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-661900 -n no-preload-661900: exit status 2 (282.244455ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-661900 --alsologtostderr -v=1
E1108 09:40:01.897580    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:02.220140    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:02.862347    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-661900 --alsologtostderr -v=1: (1.168703031s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-661900 -n no-preload-661900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-661900 -n no-preload-661900
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.64s)

TestISOImage/PersistentMounts//data (0.19s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

TestISOImage/PersistentMounts//var/lib/docker (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.18s)

TestISOImage/PersistentMounts//var/lib/cni (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.20s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)
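
The seven PersistentMounts subtests above all apply the same check: each directory must be served by an ext4 filesystem inside the guest ISO. Below is a minimal standalone sketch of that pattern, not the actual iso_test.go helper; the binary path, profile name, and probe command are taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Directories the ISO is expected to persist, per the subtests above.
	dirs := []string{
		"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
	}
	for _, d := range dirs {
		// Same probe the log shows: df restricted to ext4, filtered to the mount.
		probe := fmt.Sprintf("df -t ext4 %s | grep %s", d, d)
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "guest-788314", "ssh", probe).CombinedOutput()
		if err != nil {
			fmt.Printf("FAIL %s: %v\n%s", d, err, out)
			continue
		}
		fmt.Printf("OK   %s\n", d)
	}
}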

                                                
                                    
TestISOImage/VersionJSON (0.18s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 820bf516181cabed83ba2b27d39e21b2adf01240
iso_test.go:118:   iso_version: v1.37.0-1762018871-21834
iso_test.go:118:   kicbase_version: v0.0.48-1760939008-21773
--- PASS: TestISOImage/VersionJSON (0.18s)
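
The VersionJSON test above reads /version.json from the guest and reports four fields. Here is a minimal sketch of parsing such a file, assuming the JSON keys match the labels iso_test.go prints (minikube_version, commit, iso_version, kicbase_version); the sample payload is built from the values in the output above.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// versionInfo mirrors the four fields shown in the log; key names are assumed.
type versionInfo struct {
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
}

func main() {
	// Values copied from the test output above.
	raw := []byte(`{"minikube_version":"v1.37.0","commit":"820bf516181cabed83ba2b27d39e21b2adf01240","iso_version":"v1.37.0-1762018871-21834","kicbase_version":"v0.0.48-1760939008-21773"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		log.Fatalf("parse /version.json: %v", err)
	}
	fmt.Printf("minikube_version: %s\ncommit: %s\n", v.MinikubeVersion, v.Commit)
}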

                                                
                                    
TestISOImage/eBPFSupport (0.19s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-788314 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
E1108 09:40:06.706409    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/eBPFSupport (0.19s)
E1108 09:40:09.958375    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:09.964840    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:09.977019    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:09.998506    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:10.040009    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:10.121565    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:10.283156    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:10.604679    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:11.246767    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:11.828927    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
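
The eBPFSupport check above reduces to one probe: /sys/kernel/btf/vmlinux exists only when the kernel was built with CONFIG_DEBUG_INFO_BTF, which CO-RE-style eBPF tooling relies on. An equivalent standalone sketch (illustrative, not the test's own code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Mirrors: test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND")
	}
}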

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k6zmx" [94bfffd6-c1a7-4083-bfb0-856d0c32e5ac] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004970916s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-937503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-937503 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.56s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-937503 --alsologtostderr -v=1
E1108 09:40:12.528425    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-937503 -n embed-certs-937503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-937503 -n embed-certs-937503: exit status 2 (245.970818ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-937503 -n embed-certs-937503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-937503 -n embed-certs-937503: exit status 2 (230.932466ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-937503 --alsologtostderr -v=1
E1108 09:40:15.090387    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-937503 --alsologtostderr -v=1: (1.481735624s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-937503 -n embed-certs-937503
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-937503 -n embed-certs-937503
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-469876 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-469876 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054303196s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (10.86s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-469876 --alsologtostderr -v=3
E1108 09:40:20.212332    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:22.070639    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:23.531746    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/auto-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-469876 --alsologtostderr -v=3: (10.855682381s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.86s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-469876 -n newest-cni-469876
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-469876 -n newest-cni-469876: exit status 7 (56.705152ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-469876 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (35.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-469876 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 09:40:30.454394    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.076432    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.082845    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.094309    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.115772    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.157159    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.238648    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.400151    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:35.721761    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-469876 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (34.946879891s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-469876 -n newest-cni-469876
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.22s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24zh6" [caceb3ce-3a7a-4641-8420-0728e8f8aca0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1108 09:40:36.363379    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:37.645098    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24zh6" [caceb3ce-3a7a-4641-8420-0728e8f8aca0] Running
E1108 09:40:40.207263    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:42.552317    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/bridge-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:42.762056    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/custom-flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004457245s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24zh6" [caceb3ce-3a7a-4641-8420-0728e8f8aca0] Running
E1108 09:40:45.328986    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/flannel-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:40:48.564998    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/kindnet-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003558293s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-670955 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-670955 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-670955 --alsologtostderr -v=1
E1108 09:40:50.936561    9745 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/enable-default-cni-615410/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-670955 --alsologtostderr -v=1: (1.027436574s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955: exit status 2 (221.488093ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955: exit status 2 (228.281181ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-670955 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-670955 -n default-k8s-diff-port-670955
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-469876 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
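
Note: the image check amounts to listing what the profile's runtime holds and flagging anything outside the expected Kubernetes set. A minimal sketch, assuming the JSON output carries a repoTags field per image (the jq filter is illustrative, not part of the test):

    # Surface images not pulled from registry.k8s.io for manual review.
    out/minikube-linux-amd64 -p newest-cni-469876 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry.k8s.io/' || true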

TestStartStop/group/newest-cni/serial/Pause (2.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-469876 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-469876 -n newest-cni-469876
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-469876 -n newest-cni-469876: exit status 2 (210.859267ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-469876 -n newest-cni-469876
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-469876 -n newest-cni-469876: exit status 2 (206.514664ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-469876 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-469876 -n newest-cni-469876
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-469876 -n newest-cni-469876
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.56s)
Test skip (40/344)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 3.28
269 TestNetworkPlugins/group/cilium 3.63
295 TestStartStop/group/disable-driver-mounts 0.28

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
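
Note: these download-only subtests skip whenever a preload tarball already covers the requested Kubernetes/runtime pair. Presence can be checked directly; a minimal sketch, assuming the cache layout under this job's MINIKUBE_HOME (exact tarball names vary by preload revision):

    # Images and binaries ship inside the preload, so separate caching is skipped.
    ls /home/jenkins/minikube-integration/21866-5845/.minikube/cache/preloaded-tarball/ \
      | grep 'v1.28.0.*cri-o' || echo "no preload for v1.28.0 on cri-o"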

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982714 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
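
Note: all eight TunnelCmd subtests skip for the same precondition: the tunnel needs to edit the host routing table as root, and the suite will not prompt for a password. A minimal sketch of probing for the same condition (`sudo -n` fails fast instead of prompting; the exact command the helper checks may differ):

    # If sudo cannot run 'route' non-interactively, the tunnel tests skip.
    if sudo -n route >/dev/null 2>&1; then
      echo "passwordless 'route' available; tunnel tests can run"
    else
      echo "password required to execute 'route'; tunnel tests will skip"
    fi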

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.28s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-615410 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-615410

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-615410

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/hosts:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/resolv.conf:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-615410

>>> host: crictl pods:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: crictl containers:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> k8s: describe netcat deployment:
error: context "kubenet-615410" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-615410" does not exist

>>> k8s: netcat logs:
error: context "kubenet-615410" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-615410" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-615410" does not exist

>>> k8s: coredns logs:
error: context "kubenet-615410" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-615410" does not exist

>>> k8s: api server logs:
error: context "kubenet-615410" does not exist

>>> host: /etc/cni:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: ip a s:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: ip r s:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: iptables-save:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: iptables table nat:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-615410" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-615410" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-615410" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: kubelet daemon config:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> k8s: kubelet logs:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.93:8443
  name: cert-expiration-349612
contexts:
- context:
    cluster: cert-expiration-349612
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-349612
  name: cert-expiration-349612
current-context: ""
kind: Config
users:
- name: cert-expiration-349612
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/cert-expiration-349612/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/cert-expiration-349612/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-615410

>>> host: docker daemon status:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: docker daemon config:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: docker system info:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: cri-docker daemon status:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: cri-docker daemon config:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: cri-dockerd version:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: containerd daemon status:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: containerd daemon config:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: containerd config dump:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: crio daemon status:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: crio daemon config:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: /etc/crio:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

>>> host: crio config:
* Profile "kubenet-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-615410"

----------------------- debugLogs end: kubenet-615410 [took: 3.135564221s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-615410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-615410
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)
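
Note: every kubectl failure in the dump above follows from the captured kubeconfig: current-context is empty and no kubenet-615410 context was ever created, since the profile was skipped before startup. A minimal sketch of confirming that against the same kubeconfig:

    # Only the leftover cert-expiration-349612 context exists; none is current.
    kubectl config get-contexts -o name        # prints cert-expiration-349612
    kubectl config current-context             # errors: current-context is not set
    kubectl --context kubenet-615410 get pods  # fails as in the dump above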

TestNetworkPlugins/group/cilium (3.63s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-615410 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-615410

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-615410" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5845/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.93:8443
  name: cert-expiration-349612
contexts:
- context:
    cluster: cert-expiration-349612
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:26:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-349612
  name: cert-expiration-349612
current-context: ""
kind: Config
users:
- name: cert-expiration-349612
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/cert-expiration-349612/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5845/.minikube/profiles/cert-expiration-349612/client.key
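
Note: the kubeconfig above defines only the cert-expiration-349612 context, and current-context is "", so any command pinned to --context cilium-615410 fails before ever reaching a cluster. A quick way to confirm which contexts a kubeconfig actually defines (a minimal sketch; the output shown is simply what the config above would yield):

$ kubectl config get-contexts -o name
cert-expiration-349612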

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-615410

>>> host: docker daemon status:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: docker daemon config:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: docker system info:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: cri-docker daemon status:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: cri-docker daemon config:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: cri-dockerd version:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: containerd daemon status:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: containerd daemon config:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: containerd config dump:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: crio daemon status:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: crio daemon config:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: /etc/crio:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

>>> host: crio config:
* Profile "cilium-615410" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-615410"

----------------------- debugLogs end: cilium-615410 [took: 3.462801365s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-615410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-615410
--- SKIP: TestNetworkPlugins/group/cilium (3.63s)
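
Note: every debugLogs probe above fails the same way because TestNetworkPlugins/group/cilium was skipped before the cilium-615410 profile was ever started, so both minikube and kubectl report a missing profile/context rather than real diagnostics. The pattern reproduces against any nonexistent profile (a sketch; the profile name is illustrative and exact wording may vary by command and version):

$ minikube status -p no-such-profile
* Profile "no-such-profile" not found. Run "minikube profile list" to view all profiles.

$ kubectl --context no-such-profile get pods
error: context "no-such-profile" does not exist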

TestStartStop/group/disable-driver-mounts (0.28s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-460090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-460090
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)
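
Note: skips like the two above are ordinary testing.T skips guarded by a runtime condition (here, the active VM driver). A minimal Go sketch of the pattern, where driverName() is a hypothetical stand-in for however the harness exposes the driver, not the exact minikube helper:

package integration

import "testing"

// driverName is a hypothetical helper standing in for however the
// test harness determines the active driver (flag, env, or profile).
func driverName() string { return "kvm2" }

func TestDisableDriverMounts(t *testing.T) {
	// Mirror the guard reported at start_stop_delete_test.go:101:
	// run only when the virtualbox driver is in use, otherwise skip.
	if driverName() != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... the real test would exercise --disable-driver-mounts here ...
}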
